claim
A hallucination in a GPT-style LLM occurs when the model generates a response that appears plausible but is nonfactual, nonsensical, or inconsistent with the provided input.
