claim
Large language models hallucinate when they produce outputs that are fictitious, factually incorrect despite sounding plausible, or inconsistent with the input prompt or grounding data.
Authors
Sources
- The Role of Hallucinations in Large Language Models, CloudThat (www.cloudthat.com)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept