claim
Large Language Models (LLMs) can generate responses containing content that is factually incorrect, fabricated, or unsupported by their input; such outputs are referred to as hallucinations.

Authors

Sources

Referenced by nodes (2)