Claim
Hallucination in Large Language Models refers to the generation of outputs that appear fluent and coherent but are factually incorrect, logically inconsistent, or entirely fabricated.
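A minimal sketch of how the three failure modes distinguished in this claim might be annotated in practice; all type names and example strings below are hypothetical illustrations, not drawn from the cited sources:

```python
from dataclasses import dataclass
from enum import Enum

class HallucinationType(Enum):
    # The three failure modes named in the claim above.
    FACTUAL_ERROR = "factually incorrect"             # contradicts verifiable facts
    LOGICAL_INCONSISTENCY = "logically inconsistent"  # conflicts with itself or the prompt
    FABRICATION = "entirely fabricated"               # invents entities, events, or citations

@dataclass
class AnnotatedOutput:
    text: str                # the model-generated span
    kind: HallucinationType  # which failure mode it exhibits
    note: str                # why it was flagged

# Hypothetical examples, one per category:
examples = [
    AnnotatedOutput("The Eiffel Tower is located in Berlin.",
                    HallucinationType.FACTUAL_ERROR,
                    "Reference: the Eiffel Tower is in Paris."),
    AnnotatedOutput("All birds can fly. Penguins are birds. Penguins cannot fly.",
                    HallucinationType.LOGICAL_INCONSISTENCY,
                    "The statements are mutually inconsistent."),
    AnnotatedOutput("As shown by Smith et al. (2021), 'Quantum Parsing at Scale'.",
                    HallucinationType.FABRICATION,
                    "No such publication exists in the reference corpus."),
]

for ex in examples:
    print(f"{ex.kind.name}: {ex.text}")
```

Note that each example reads as fluent, confident prose; the defect lies in its relation to facts, to the surrounding text, or to reality, which is what makes hallucination hard to detect from surface form alone.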
Authors
Sources
- Survey and analysis of hallucinations in large language models (frontiersin.org, via serper)
- Hallucination in Large Language Models: What Is It and Why Is It ... (medium.com, via serper)
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via serper)
Referenced by nodes (2)
- Large Language Models (concept)
- hallucination (concept)