Relations (1)

related (1.00) — strongly supporting 1 fact

Hallucination is a phenomenon associated with machine learning: Large Language Models are described as brittle machine learning models prone to generating inaccurate responses [1].

Facts (1)

Sources
Benchmarking Hallucination Detection Methods in RAG - Cleanlab (cleanlab.ai), 1 fact
claim: Large Language Models (LLMs) are prone to hallucination because they are fundamentally brittle machine learning models that may fail to generate accurate responses even when the retrieved context contains the correct answer, particularly when reasoning across different facts is required.
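
To make the failure mode in this claim concrete, here is a minimal, hypothetical Python sketch (not a method from the cited Cleanlab benchmark; the function token_overlap and the example strings are invented for illustration). It shows why a naive lexical groundedness check cannot catch this kind of hallucination: the answer blends two facts that are both present in the retrieved context, so every token in the wrong answer is "supported" by the context.

    # Hypothetical sketch: a naive lexical groundedness check for a RAG answer.
    # Real hallucination-detection methods are more sophisticated; this only
    # illustrates the claim's failure mode, where an answer is wrong even
    # though the retrieved context contains the correct information.

    def token_overlap(answer: str, context: str) -> float:
        """Fraction of answer tokens that also appear in the context."""
        answer_tokens = set(answer.lower().split())
        context_tokens = set(context.lower().split())
        if not answer_tokens:
            return 0.0
        return len(answer_tokens & context_tokens) / len(answer_tokens)

    context = (
        "The Eiffel Tower was completed in 1889. "
        "The Statue of Liberty was dedicated in 1886."
    )
    answer = "The Eiffel Tower was completed in 1886."  # blends the two facts

    # Prints overlap = 1.00: every answer token appears in the context,
    # yet the answer is wrong because reasoning across the two facts failed.
    print(f"overlap = {token_overlap(answer, context):.2f}")

The point of the sketch is that the hallucination here is a reasoning error across facts, not a vocabulary error, which is why detecting it requires methods beyond surface-level matching against the retrieved context.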