claim
Large Language Models (LLMs) sometimes generate content that contradicts their source material (intrinsic hallucination) or that cannot be verified against any source (extrinsic hallucination).
