claim
Large Language Models (LLMs) tend to produce inaccurate or unsupported information, a problem known as 'hallucination'.
Authors
Sources
- Integrating Knowledge Graphs into RAG-Based LLMs to Improve ... thesis.unipd.it
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept