reference
Ziwei Xu, Sanjay Jain, and Mohan Kankanhalli argued in their 2024 paper 'Hallucination is Inevitable: An Innate Limitation of Large Language Models' that hallucination is an innate limitation of large language models that can be mitigated but not fully eliminated.
Authors
- Ziwei Xu
- Sanjay Jain
- Mohan Kankanhalli
Sources
- Re-evaluating Hallucination Detection in LLMs (arxiv.org, via serper)
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org, via serper)
- Awesome-Hallucination-Detection-and-Mitigation (github.com, via serper)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept