Claim
OpenAI research suggests that large language models hallucinate because they are rewarded for guessing an answer even when uncertain, rather than for saying "I don't know."
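The incentive described in the claim can be made concrete with a small expected-score calculation. This is a hedged sketch: the grading values and the 0.3 probability are illustrative assumptions, not figures from the cited source.

```python
# Illustrative sketch of the incentive in the claim: under accuracy-style
# grading, guessing beats abstaining even at low confidence; under grading
# that penalizes wrong answers, abstaining can win. Numbers are assumptions.

def expected_scores(p_correct, right, wrong, abstain):
    """Return (expected score of guessing, score of abstaining)."""
    guess = p_correct * right + (1 - p_correct) * wrong
    return guess, abstain

p = 0.3  # assumed chance the model's guess is correct

# Accuracy-style grading: 1 if right, 0 if wrong, 0 for "I don't know".
guess, abstain = expected_scores(p, right=1, wrong=0, abstain=0)
assert guess > abstain  # guessing always weakly dominates, so models guess

# Grading that penalizes confident errors: -1 if wrong, 0 for abstaining.
guess, abstain = expected_scores(p, right=1, wrong=-1, abstain=0)
assert guess < abstain  # at p=0.3, abstaining now has the higher score
```

Under the first scheme a wrong guess costs nothing relative to abstaining, so a model optimizing its score learns to answer regardless of uncertainty; the second scheme makes "I don't know" the rational choice at low confidence.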
Authors
Sources
- What Really Causes Hallucinations in LLMs? - AI Exploration Journey (aiexpjourney.substack.com, via Serper)
Referenced by nodes (2)
- Large Language Models (concept)
- OpenAI (entity)