claim
Hallucination in Large Language Models refers to outputs that appear fluent and coherent but are factually incorrect, logically inconsistent, or entirely fabricated.
