concept

foundation models

Also known as: foundation model

Facts (33)

Sources
Medical Hallucination in Foundation Models and Their ... · medrxiv.org · medRxiv · Mar 3, 2025 · 10 facts
claim: Medical hallucination is defined as any instance in which a foundation model generates misleading medical content.
claim: The authors of 'Medical Hallucination in Foundation Models and Their ...' contributed a taxonomy for understanding and addressing medical hallucinations, benchmarked models using a medical hallucination dataset and physician-annotated LLM responses to real medical cases, and conducted a multi-national clinician survey on experiences with medical hallucinations.
measurement: Experimental evaluation on a medical hallucination benchmark indicates that Chain-of-Thought (CoT) prompting and Internet Search are effective techniques for reducing hallucination rates in foundation models.
claim: The authors define medical hallucination in foundation models as a distinct concept from general hallucinations, characterized by unique risks within the healthcare domain.
claim: The causes of medical hallucinations in foundation models are driven by data quality, model limitations, and healthcare domain complexities.
claim: Foundation models, including Large Language Models (LLMs) and Vision-Language Models (VLMs), are used in healthcare for clinical decision support, medical research, and improving healthcare quality and safety.
claim: Inference techniques such as Chain-of-Thought (CoT) and Search-Augmented Generation can effectively reduce hallucination rates in foundation models, though non-trivial levels of hallucination persist.
claim: Medical hallucinations in foundation models are categorized into a taxonomy ranging from factual inaccuracies to complex reasoning errors.
claim: Detection and mitigation strategies for medical hallucinations in foundation models include factual verification, consistency checks, uncertainty quantification, and prompt engineering (a consistency-check sketch follows this list).
claim: The taxonomy of medical hallucinations in foundation models clusters errors into five main categories: factual errors, outdated references, spurious correlations, incomplete chains of reasoning, and fabricated sources or guidelines.
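The consistency-check and uncertainty-quantification strategies named above can be illustrated with a minimal sketch: sample several responses to the same clinical question and flag low-agreement answers for review. The `generate` callable, the toy generator, and the agreement threshold are illustrative assumptions, not anything specified by the source.

```python
import random
from collections import Counter

def consistency_check(generate, question, n_samples=5, agreement_threshold=0.6):
    """Self-consistency as a rough hallucination signal.

    `generate` is a hypothetical callable returning one model answer per call;
    any foundation-model API could be wrapped to fit this signature.
    """
    answers = [generate(question).strip().lower() for _ in range(n_samples)]
    top_answer, top_count = Counter(answers).most_common(1)[0]
    agreement = top_count / n_samples            # fraction of samples that agree
    return {
        "answer": top_answer,
        "agreement": agreement,                  # crude uncertainty estimate
        "flag_for_review": agreement < agreement_threshold,
    }

# Toy generator that is unstable on one kind of question:
def toy_generate(q):
    if "microcytic" in q:
        return random.choice(["iron-deficiency anemia", "thalassemia trait"])
    return "type 2 diabetes"

print(consistency_check(toy_generate, "Likely cause of microcytic anemia?"))
```

Low agreement does not prove a hallucination, but it is a cheap trigger for the factual-verification step the source describes.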
Medical Hallucination in Foundation Models and Their Impact on ... · medrxiv.org · medRxiv · Nov 2, 2025 · 10 facts
claim: The authors of the study define medical hallucination as a reasoning-driven failure mode of foundation models that is distinct from general hallucinations in both its origin and clinical consequence.
claim: The authors of the study 'Medical Hallucination in Foundation Models and Their Impact on ...' define medical hallucination as any model-generated output that is factually incorrect, logically inconsistent, or unsupported by authoritative clinical evidence in ways that could alter clinical decisions.
claim: The study's empirical evaluation, utilizing a physician-audited benchmark, indicates that most medical hallucinations in foundation models stem from failures in causal and temporal reasoning rather than missing medical knowledge.
claim: Foundation models generate hallucinations because their autoregressive training objectives prioritize token-likelihood optimization over epistemic accuracy, leading to overconfidence and poorly calibrated uncertainty.
measurement: Structured prompting and retrieval-augmented generation can reduce medical hallucinations in foundation models by over 10%, according to the study's empirical evaluation (a retrieval-grounding sketch follows this list).
measurement: In an evaluation of 11 foundation models (7 general-purpose, 4 medical-specialized) across seven medical hallucination tasks, general-purpose models achieved a median of 76.6% hallucination-free responses, while medical-specialized models achieved a median of 51.3%.
claim: Foundation models are increasingly used in healthcare for clinical decision support, medical research, and health-system operations.
claim: Medical hallucinations in foundation models manifest as misordered symptom progression, flawed diagnostic logic, or misplaced causal inference, and these errors persist even in large-scale models.
measurement: Physician audits confirmed that 64–72% of residual hallucinations in foundation models stemmed from causal or temporal reasoning failures rather than knowledge gaps.
claim: The study evaluated a diverse set of foundation models, including both general-purpose models and medical-purpose models designed or fine-tuned for healthcare applications, to assess medical hallucinations.
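Retrieval-augmented generation, mentioned in the measurement above, can be sketched as follows: retrieve the passages most relevant to the question from a trusted corpus and prepend them to the prompt so the model is asked to answer only from cited evidence. The corpus, the word-overlap retriever, and the prompt template below are illustrative assumptions; the study's actual retrieval setup is not specified here.

```python
def retrieve(corpus, question, k=2):
    """Rank passages by simple word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def build_grounded_prompt(corpus, question):
    """Prepend retrieved evidence and instruct the model to stay within it."""
    passages = retrieve(corpus, question)
    evidence = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer using only the evidence below; say 'insufficient evidence' otherwise.\n"
        f"Evidence:\n{evidence}\n\nQuestion: {question}\nAnswer:"
    )

corpus = [
    "Metformin is first-line pharmacotherapy for type 2 diabetes in most guidelines.",
    "Warfarin requires INR monitoring because of its narrow therapeutic window.",
    "Iron-deficiency anemia typically presents with microcytic, hypochromic red cells.",
]
print(build_grounded_prompt(corpus, "What is first-line drug therapy for type 2 diabetes?"))
```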
Survey and analysis of hallucinations in large language models · frontiersin.org · Frontiers · Sep 29, 2025 · 2 facts
reference: Bommasani, R., Hudson, D. A., Adeli, E., Altman, R., Arora, S., von Arx, S., et al. published 'On the opportunities and risks of foundation models' as an arXiv preprint (arXiv:2108.07258) in 2021.
reference: Touvron et al. (2023) introduced Llama 2, which consists of open foundation and fine-tuned chat models.
Understanding LLM Understanding · skywritingspress.ca · Skywritings Press · Jun 14, 2024 · 2 facts
claim: Empirical neural scaling laws serve as a planning tool for compute investment, predicting the scaling behavior of foundation models and helping researchers choose methods that scale effectively with increased computation (a curve-fitting sketch follows this list).
claim: Foundation models are large-scale, self-supervised pre-trained models whose capabilities increase significantly with the scaling of training data, model size, and computational power.
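The scaling-law claim can be made concrete with a small curve-fitting sketch: empirical loss-versus-scale points are commonly fit with a power law L(N) ≈ a · N^(−α), which is a straight line in log-log space and can then be extrapolated to larger scales. The data points below are made up for illustration, not taken from any source.

```python
import numpy as np

# Hypothetical (parameter count, validation loss) pairs; illustrative only.
params = np.array([1e7, 1e8, 1e9, 1e10])
loss = np.array([4.10, 3.35, 2.78, 2.30])

# Fit log L = log a - alpha * log N, i.e. the power law L(N) = a * N**(-alpha).
slope, intercept = np.polyfit(np.log(params), np.log(loss), 1)
alpha, a = -slope, np.exp(intercept)

# Extrapolate to a larger model to estimate the return on further scaling.
predicted_loss = a * (1e11) ** (-alpha)
print(f"alpha = {alpha:.3f}, predicted loss at 1e11 params = {predicted_loss:.2f}")
```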
A Comprehensive Review of Neuro-symbolic AI for Robustness ... · link.springer.com · Springer · Dec 9, 2025 · 2 facts
procedure: Experiments in neuro-symbolic AI should focus on integrating symbolic reasoning modules with foundation models to test how symbolic priors can guide large-scale inference more reliably (a minimal integration sketch follows this list).
reference: Chrysos et al. identified quantifying uncertainty and hallucination in foundation models as the next frontier in reliable AI in their 2025 ICLR workshop proposal.
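One common integration pattern, offered here only as a minimal sketch of what such an experiment might look like, is to let the foundation model propose candidate answers and have a symbolic module reject candidates that violate hard constraints. The `propose_candidates` callable and the rules below are hypothetical placeholders, not anything specified by the reviewed work.

```python
def symbolic_filter(candidates, rules):
    """Keep only candidates that satisfy every symbolic constraint (rule)."""
    return [c for c in candidates if all(rule(c) for rule in rules)]

def answer(question, propose_candidates, rules):
    """Neural proposal + symbolic verification: a basic neuro-symbolic loop."""
    candidates = propose_candidates(question)   # stand-in for a model call
    valid = symbolic_filter(candidates, rules)
    return valid[0] if valid else "no candidate passed the symbolic checks"

# Toy example: scheduling answers must respect simple temporal rules.
rules = [
    lambda c: c["start"] < c["end"],                       # events end after they start
    lambda c: 0 <= c["start"] <= 24 and 0 <= c["end"] <= 24,
]

def toy_propose(_question):
    return [{"start": 14, "end": 13}, {"start": 9, "end": 10}]  # first violates a rule

print(answer("Pick a valid meeting slot.", toy_propose, rules))
```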
A Survey on the Theory and Mechanism of Large Language Models · arxiv.org · arXiv · Mar 12, 2026 · 1 fact
reference: The paper 'SFT memorizes, RL generalizes: a comparative study of foundation model post-training' was published in the Proceedings of the 42nd International Conference on Machine Learning, Vol. 267, pp. 10818–10838.
The Synergy of Symbolic and Connectionist AI in LLM-Empowered ... · arxiv.org · arXiv · Jul 11, 2024 · 1 fact
claim: Advancements in Large Language Models (LLMs) and foundation models have catalyzed the integration of connectionist and symbolic AI paradigms.
Large Language Models Meet Knowledge Graphs for Question ... · arxiv.org · arXiv · Sep 22, 2025 · 1 fact
reference: Li et al. (2024a) created M3SciQA, a multi-modal multi-document scientific question answering benchmark for evaluating foundation models, published in EMNLP (pages 15419–15446).
Designing Knowledge Graphs for AI Reasoning, Not Guesswork · linkedin.com · Piers Fawkes · LinkedIn · Jan 14, 2026 · 1 fact
perspective: Industries and enterprises built on structured data have lagged in AI adoption, and unlocking progress requires foundation models built specifically for structured data.
Track: Poster Session 3 - AISTATS 2026 · virtual.aistats.org · Samuel Tesfazgi, Leonhard Sprandl, Sandra Hirche · AISTATS · 1 fact
claim: Out-of-Distribution (OOD) problems, defined as data discrepancies between training and testing environments, hinder the generalization of foundation models (a shift-detection sketch follows below).
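To make the OOD definition concrete, the sketch below estimates a simple train/test discrepancy score by comparing per-feature means in units of the training standard deviation; larger values suggest the test distribution has drifted. The synthetic data and the scoring choice are illustrative assumptions only, not a method from the cited poster.

```python
import numpy as np

def shift_score(train, test):
    """Mean per-feature standardized difference between train and test means."""
    mu_tr, sd_tr = train.mean(axis=0), train.std(axis=0) + 1e-8
    return float(np.mean(np.abs(test.mean(axis=0) - mu_tr) / sd_tr))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 8))
test_iid = rng.normal(0.0, 1.0, size=(200, 8))   # same distribution as training
test_ood = rng.normal(1.5, 1.0, size=(200, 8))   # shifted test distribution

print("in-distribution score:", round(shift_score(train, test_iid), 3))
print("out-of-distribution score:", round(shift_score(train, test_ood), 3))
```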
Bridging the Gap Between LLMs and Evolving Medical Knowledge · arxiv.org · arXiv · Jun 29, 2025 · 1 fact
reference: Nori et al. (2023) published 'Can generalist foundation models outcompete special-purpose tuning? case study in medicine' as an arXiv preprint (arXiv:2311.16452).
A survey on augmenting knowledge graphs (KGs) with large ... · link.springer.com · Springer · Nov 4, 2024 · 1 fact
claim: Foundation models are trained without task-specific instructions for their eventual use cases, as exemplified by GPT-3.