claim
Instruction-tuned models can still hallucinate, particularly on long-context, ambiguous, or factual-recall tasks, as documented in studies by OpenAI (2023a) and Bang and Madotto (2023).
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (2)
- hallucination concept
- OpenAI entity