Claim
Large Language Model (LLM) hallucination is defined as the generation of content that is unsupported by the input prompt or by confirmed knowledge sources, even though the output appears linguistically coherent.
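As a rough illustration of this definition, the sketch below flags an output as a potential hallucination when its content words are largely absent from both the prompt and the supplied knowledge sources. The function names, the lexical-overlap metric, and the 0.5 threshold are illustrative assumptions, not taken from the cited survey; practical detectors rely on much stronger semantic checks.

```python
# A minimal sketch of the definition above, not a production detector:
# an output counts as a potential hallucination when its content is
# fluent but largely ungrounded in the prompt and knowledge sources.
import re

def content_tokens(text: str) -> set[str]:
    """Lowercase word tokens, dropping very short function-word-like tokens."""
    return {t for t in re.findall(r"[a-z']+", text.lower()) if len(t) > 3}

def support_ratio(output: str, references: list[str]) -> float:
    """Fraction of the output's content tokens found in any reference text."""
    out = content_tokens(output)
    if not out:
        return 1.0  # nothing substantive to verify
    supported = {t for ref in references for t in content_tokens(ref)}
    return len(out & supported) / len(out)

def is_potential_hallucination(output: str, prompt: str,
                               knowledge: list[str],
                               threshold: float = 0.5) -> bool:
    """Flag output whose content is mostly ungrounded in prompt + knowledge.

    The 0.5 threshold is an arbitrary illustrative choice.
    """
    return support_ratio(output, [prompt, *knowledge]) < threshold

if __name__ == "__main__":
    prompt = "When was the Eiffel Tower completed?"
    knowledge = ["The Eiffel Tower was completed in 1889 in Paris."]
    grounded = "The Eiffel Tower was completed in 1889."
    ungrounded = "The Eiffel Tower was designed by Leonardo da Vinci as a lighthouse."
    print(is_potential_hallucination(grounded, prompt, knowledge))    # False
    print(is_potential_hallucination(ungrounded, prompt, knowledge))  # True
```

Note that the ungrounded example is linguistically coherent, which is exactly why lexical grounding checks like this one, however crude, target the definition better than fluency-based filters.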
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (1)
- hallucination concept