claim
Large language models are prone to hallucination, defined as generating assertions that sound plausible but are factually inaccurate.
Authors
Sources
- New tool, dataset help detect hallucinations in large language models (www.amazon.science, via Serper)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept