Claim
Common hallucination patterns in large language models include inventing biographies for obscure academics, fabricating publications for minor authors, and generating incorrect details about small businesses or niche historical events, all cases where training data is sparse.
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com, via serper)
Referenced by nodes (1)
- Large Language Models (concept)