Claim
Large Language Models (LLMs) frequently produce inaccurate or unsupported information, a phenomenon commonly referred to as 'hallucination'.
Authors
Sources
- Integrating Knowledge Graphs into RAG-Based LLMs to Improve ... (thesis.unipd.it, via serper)
Referenced by nodes (2)
- Large Language Models (concept)
- hallucination (concept)