Claim
DeepSeek-67B demonstrates strong internal consistency and confidence, but its hallucinations stem primarily from factual gaps in its training data or limitations of its architecture, which prompt engineering cannot resolve.
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org, via serper)
Referenced by nodes (1)
- prompt engineering concept