claim
Large language models tend to hallucinate, that is, to make assertions that sound plausible but are factually inaccurate.

Authors

Sources

Referenced by nodes (2)