claim
Small language models tend to produce hallucinations that are obviously wrong or awkwardly phrased, making them easier to detect.
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (2)
- hallucination concept
- small language models concept