claim
Large language models hallucinate when they produce outputs that are fictitious, plausible-sounding but factually incorrect, or inconsistent with the input prompt or grounding data.
