claim
Hallucinations in Large Language Models occur when the probabilistic model assigns higher probability to a hallucinatory output (y_halluc) than to the factually correct response (y_fact), i.e. P(y_halluc | x) > P(y_fact | x) for a prompt x. This represents a mismatch between the model's learned probability distribution and the real-world factual distribution.
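The condition in the claim can be sketched with a toy example, assuming hypothetical per-token log-probabilities for two candidate answers (no real model is queried; the numbers are illustrative):

```python
import math

def sequence_logprob(token_logprobs):
    """Log-probability of a sequence = sum of its per-token log-probs."""
    return sum(token_logprobs)

# Hypothetical per-token log-probs for two candidate answers to one prompt.
logp_halluc = sequence_logprob([-0.2, -0.5, -0.3])  # fluent but factually wrong
logp_fact = sequence_logprob([-0.4, -0.9, -0.6])    # factually correct

# Under likelihood-based decoding, the model hallucinates when it favors
# the wrong continuation: P(y_halluc | x) > P(y_fact | x), equivalently
# log P(y_halluc | x) > log P(y_fact | x).
is_hallucination = logp_halluc > logp_fact
print(is_hallucination)  # True: the model prefers the hallucinatory output
```

Here the wrong answer's total log-probability (-1.0) exceeds the correct answer's (-1.9), so greedy selection between the two candidates would emit the hallucination.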
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept