claim
Large Language Models (LLMs) can generate responses containing inconsistencies, which are referred to as hallucinations.
Authors
Sources
- A Knowledge-Graph Based LLM Hallucination Evaluation Framework (www.researchgate.net, via serper)
Referenced by nodes (2)
- Large Language Models (concept)
- hallucination (concept)