Claim
Knowledge gaps cause hallucinations: training cutoffs, under-representation of tail entities, restricted access to specialized domains, and the absence of a symbolic world model all place many factual questions outside the model's reliable knowledge boundary, yet the model cannot dependably detect when it is operating outside that boundary.
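As an illustration of why boundary detection is hard, here is a minimal sketch of a sampling-based self-consistency probe: ask the model the same question several times and treat low agreement among the answers as a sign of a knowledge gap. Everything here is an assumption for illustration: `mock_generate` is a hypothetical stand-in for a real model call, and the questions and agreement threshold are invented. Note that high agreement is necessary but not sufficient for correctness, which is exactly the claim's point.

```python
import random
from collections import Counter

def mock_generate(question: str) -> str:
    """Hypothetical stand-in for a language-model call (swap in a real
    chat-completion call in practice). It answers a well-known question
    consistently but guesses among plausible names for a tail entity."""
    if "capital of France" in question:
        return "Paris"  # inside the knowledge boundary: stable answer
    # outside the boundary: the model samples a plausible-sounding guess
    return random.choice(["Smith", "Nguyen", "Okafor", "Ivanova"])

def self_consistency(question: str, n_samples: int = 10) -> tuple[str, float]:
    """Sample several answers and return the modal answer plus its
    agreement rate. Low agreement suggests the question falls outside
    the model's reliable knowledge boundary; high agreement does not
    guarantee correctness, so the probe is a heuristic, not a detector."""
    answers = [mock_generate(question) for _ in range(n_samples)]
    top, count = Counter(answers).most_common(1)[0]
    return top, count / n_samples

if __name__ == "__main__":
    for q in ["What is the capital of France?",
              "Who was the mayor of a small town in 1912?"]:
        answer, agreement = self_consistency(q)
        flag = "likely knowledge gap" if agreement < 0.6 else "stable"
        print(f"{q!r} -> {answer!r} (agreement {agreement:.0%}, {flag})")
```

Running the sketch shows the in-boundary question at 100% agreement and the tail-entity question scattering across guesses; the 0.6 threshold is arbitrary and would need calibration against a real model.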
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com, via serper)
Referenced by nodes (2)
- hallucination concept
- knowledge gaps concept