Claim
Hallucinations in large language models (LLMs) are outputs that sound plausible but are factually incorrect or fabricated.
