claim
Large Language Models (LLMs) sometimes generate content that contradicts their source material (intrinsic hallucination) or that cannot be verified against any source (extrinsic hallucination).
Authors
Sources
- A survey on augmenting knowledge graphs (KGs) with large ... link.springer.com via serper
Referenced by nodes (2)
- Large Language Models concept
- extrinsic hallucination concept