claim
Large Language Model (LLM) hallucinations are defined as the generation of inaccurate or misleading content that diverges from the user's intent, contradicts the model's own previously generated output, or conflicts with verifiable factual knowledge.
