Claim
Integrative grounding is a task that requires Large Language Models to retrieve and verify multiple interdependent pieces of evidence for a complex query; when the external information is incomplete, models often hallucinate rationalizations drawn from their internal knowledge rather than acknowledging the gap.
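The intended behaviour is easier to see in code. Below is a minimal, illustrative sketch (not from the source) of the abstention discipline the claim implies: every interdependent sub-claim must be backed by retrieved evidence, and any retrieval gap triggers abstention instead of a rationalization from parametric knowledge. The `Evidence` type, `grounded_answer`, and the placeholder LLM call are hypothetical names introduced for illustration.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evidence:
    sub_claim: str          # the interdependent sub-claim this evidence should support
    passage: Optional[str]  # retrieved text, or None if retrieval came up empty

def grounded_answer(query: str, evidence: list[Evidence]) -> str:
    """Verify that every sub-claim has external support before answering."""
    missing = [e.sub_claim for e in evidence if e.passage is None]
    if missing:
        # Abstain rather than rationalize from internal knowledge:
        # this is exactly the failure mode the claim describes.
        return "Insufficient evidence for: " + "; ".join(missing)
    context = "\n".join(e.passage for e in evidence)
    # Placeholder for a real LLM call conditioned only on `context`.
    return f"[LLM answer to {query!r}, grounded in {len(evidence)} passages]"

if __name__ == "__main__":
    ev = [
        Evidence("company X was founded in 1998", "Passage: X was founded in 1998 ..."),
        Evidence("its founder later joined company Y", None),  # retrieval gap
    ]
    print(grounded_answer("Did the founder of X join Y?", ev))
```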
Authors
Sources
- EdinburghNLP/awesome-hallucination-detection (GitHub, github.com)
Referenced by nodes (2)
- Large Language Models (concept)
- hallucination (concept)