Claim
When asked about obscure entities, large language models often generate plausible-sounding facts drawn from the kinds of information typically associated with that entity's category, even though the specific details are not grounded in any actual knowledge of the entity.
Authors
Sources
- "Hallucination Causes: Why Language Models Fabricate Facts" — mbrenndoerfer.com (via serper)
Referenced by nodes (2)
- Large Language Models (concept)
- hallucination (concept)