reference
Orgad et al. (2024/2025), in 'LLMs Know More Than They Show', investigated the intrinsic representation of Large Language Model (LLM) hallucinations.
Authors
Sources
- Re-evaluating Hallucination Detection in LLMs (arXiv, arxiv.org)
Referenced by nodes (1)
- hallucination concept