Reference
The paper 'Survey of Hallucination in Natural Language Generation' by Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung, published in ACM Computing Surveys in 2023, provides a comprehensive overview of hallucination phenomena in natural language generation systems.
Sources
- Detecting and Evaluating Medical Hallucinations in Large Vision ... (arxiv.org)
- A framework to assess clinical safety and hallucination rates of LLMs ... (nature.com)
- Survey and analysis of hallucinations in large language models (frontiersin.org)
- Awesome-Hallucination-Detection-and-Mitigation (github.com)
Referenced by nodes (1)
- natural language generation concept