Relations (1)

related 2.32 — strongly supporting 4 facts

Hallucination is a critical performance challenge inherent to foundation models, arising from their autoregressive training objectives [1] and persisting despite mitigation techniques such as Chain-of-Thought (CoT) prompting [2]. Researchers are actively working to quantify these errors [3], which are frequently linked to reasoning failures within the models rather than to gaps in knowledge [4].
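For context, a minimal sketch of the standard autoregressive training objective the relation refers to (the notation is illustrative and not drawn from the cited sources): the model is trained to minimize the next-token negative log-likelihood,

\mathcal{L}(\theta) = -\sum_{t=1}^{T} \log p_\theta(x_t \mid x_{<t}),

which rewards assigning high probability to plausible continuations but contains no term tying those probabilities to factual correctness. This is one way to read the claim that token-likelihood optimization is prioritized over epistemic accuracy.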

Facts (4)

Sources
Medical Hallucination in Foundation Models and Their Impact on ... (medRxiv, medrxiv.org) — 2 facts
Claim: Foundation models generate hallucinations because their autoregressive training objectives prioritize token-likelihood optimization over epistemic accuracy, leading to overconfidence and poorly calibrated uncertainty.
Measurement: Physician audits confirmed that 64–72% of residual hallucinations in foundation models stemmed from causal or temporal reasoning failures rather than knowledge gaps.
Medical Hallucination in Foundation Models and Their ... (medRxiv, medrxiv.org) — 1 fact
Claim: Inference techniques such as Chain-of-Thought (CoT) and Search-Augmented Generation can effectively reduce hallucination rates in foundation models, though non-trivial levels of hallucination persist.
A Comprehensive Review of Neuro-symbolic AI for Robustness ... (Springer, link.springer.com) — 1 fact
Reference: Chrysos et al. identified quantifying uncertainty and hallucination in foundation models as the next frontier in reliable AI in their 2025 ICLR workshop proposal.