claim
Hallucinations in large language models arise from both prompt-dependent and model-intrinsic factors, and each class of cause calls for tailored mitigation approaches.

Authors

Sources

Referenced by nodes (2)