Claim
Hallucinations in Large Language Models create risks of misinformation, reduced user trust, and accountability gaps (Bommasani et al., 2021; Weidinger et al., 2022).
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept