Claim
Model-intrinsic hallucinations arise from limitations in the training data, architectural biases, or inference-time sampling strategies, and can occur even when well-structured prompts are used, as noted by Bang and Madotto (2023), OpenAI (2023a), and Chen et al. (2023).
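To make the sampling-strategy point concrete, here is a minimal sketch of temperature scaling over a toy next-token distribution. The vocabulary and logit values are invented for illustration and are not drawn from any of the cited sources; the sketch only shows the mechanism by which a higher sampling temperature shifts probability mass toward low-likelihood continuations.

```python
import numpy as np

# Toy next-token logits, assumed for illustration: the model strongly
# prefers a faithful continuation ("Paris") but assigns nonzero mass
# to implausible ones.
vocab = ["Paris", "Lyon", "Berlin", "in 1889", "on Mars"]
logits = np.array([4.0, 1.5, 1.0, 0.5, -1.0])

def softmax_with_temperature(logits, temperature):
    """Temperature > 1 flattens the distribution, moving probability
    mass toward low-likelihood tokens; temperature < 1 sharpens it."""
    scaled = logits / temperature
    scaled -= scaled.max()  # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

for t in (0.5, 1.0, 1.5):
    probs = softmax_with_temperature(logits, t)
    tail_mass = probs[3:].sum()  # mass on implausible continuations
    print(f"T={t}: p(Paris)={probs[0]:.3f}, tail mass={tail_mass:.3f}")
```

Because the tail mass comes from the model's own distribution, no prompt can drive it to zero, which is why this class of hallucination is called model-intrinsic.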
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (2)
- OpenAI (entity)
- training data (concept)