Claim
Large language models conditioned on external knowledge (e.g., via retrieval augmentation) may still hallucinate, because their generated output is not strictly constrained to the retrieved information.
Authors
Sources
- Grounding LLM Reasoning with Knowledge Graphs - arXiv (arxiv.org)
Referenced by nodes (1)
- Large Language Models concept
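The claim above can be illustrated with a minimal sketch: since a retrieval-augmented model's decoding is not restricted to the retrieved context, a post-hoc groundedness check can flag output sentences with little lexical overlap against that context. The function names and the overlap threshold below are illustrative assumptions, not part of the cited source; real systems use stronger attribution methods (e.g., NLI-based entailment checks).

```python
import re

def tokenize(text):
    """Lowercase alphanumeric word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def ungrounded_sentences(answer, context, threshold=0.5):
    """Return sentences whose token overlap with the retrieved context
    falls below the threshold; these are candidate hallucinations.
    (Naive lexical heuristic, for illustration only.)"""
    context_tokens = tokenize(context)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", answer.strip()):
        tokens = tokenize(sentence)
        if not tokens:
            continue
        overlap = len(tokens & context_tokens) / len(tokens)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

# A generated answer can mix grounded and ungrounded statements:
context = "The Eiffel Tower is in Paris and was completed in 1889."
answer = "The Eiffel Tower was completed in 1889. It was designed by aliens."
print(ungrounded_sentences(answer, context))
```

The second sentence is flagged because almost none of its tokens appear in the retrieved context, while the first passes; this mirrors the claim that conditioning on external knowledge does not by itself prevent ungrounded generation.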