Claim
Scaling up large language models increases the fluency and coherence of generated text, which makes hallucinations more convincing and harder to detect.
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com, via serper)
Referenced by nodes (3)
- Large Language Models concept
- hallucination concept
- coherence concept