Claim
Large Language Models (LLMs) are prone to hallucination: even when the retrieved context contains the correct answer, they may fail to generate an accurate response, particularly when the answer requires reasoning across multiple facts.

Authors

Sources

Referenced by nodes (3)