concept

AI hallucinations


Facts (32)

Sources
Medical Hallucination in Foundation Models and Their ... (medrxiv.org, medRxiv, Mar 3, 2025; 15 facts)
procedure: The survey conducted by the authors of 'Medical Hallucination in Foundation Models and Their ...' includes questions asking respondents to rate their trust in AI/LLM answers, the frequency of correctness of AI/LLM answers, and the frequency of encountering AI hallucinations on a scale of 1 to 5.
claim: Uncertainty-based hallucination detection methods assume that AI model hallucinations occur when a model lacks confidence in its generated outputs.
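A minimal sketch of this uncertainty-based idea, assuming the serving stack exposes the log-probability the model assigned to each generated token; the function name and the threshold value are illustrative, not taken from the paper.

```python
def flag_low_confidence(token_logprobs, nll_threshold=2.5):
    """Flag a generated answer as a possible hallucination when the
    model's own token-level confidence is low.

    token_logprobs: log-probabilities (natural log) the model assigned
    to each token it actually generated.
    nll_threshold: mean negative log-likelihood above which the output
    is treated as low-confidence (illustrative value).
    """
    if not token_logprobs:
        return False, float("nan")
    mean_nll = -sum(token_logprobs) / len(token_logprobs)
    return mean_nll > nll_threshold, mean_nll

# Example: a confidently generated span vs. an uncertain one.
confident = [-0.1, -0.3, -0.2, -0.15]
uncertain = [-2.9, -3.4, -2.2, -3.8]
print(flag_low_confidence(confident))   # (False, ~0.19)
print(flag_low_confidence(uncertain))   # (True, ~3.08)
```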
claim: McDermott et al. (2024) state that AI hallucinations disrupt clinical workflow efficiency by forcing clinicians to verify or correct AI-generated information, which adds to their workload and diverts attention from direct patient care.
measurement: In a survey of 59 respondents regarding the impact of AI hallucinations on a 1–5 scale, 21 respondents rated the impact as moderate (3), 22 rated it as low (2), 9 saw no impact (1), 5 observed high impact (4), and 2 reported a very high impact (5).
claim: Detection methods for AI hallucinations are categorized into three groups: factual verification, summary consistency verification, and uncertainty-based hallucination detection.
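Of the three categories, summary consistency verification is the simplest to illustrate. Below is a toy sketch that flags numbers and named terms appearing in an AI-generated summary but not in the source note; the example note, drug name, and regexes are invented for illustration, and production systems typically rely on entity linking or entailment models instead.

```python
import re

def unsupported_items(source_text, summary_text):
    """Toy consistency check: return numbers and capitalized terms that
    appear in the summary but nowhere in the source document."""
    src_lower = source_text.lower()
    numbers = re.findall(r"\b\d+(?:\.\d+)?\b", summary_text)
    entities = re.findall(r"\b[A-Z][a-zA-Z]{2,}\b", summary_text)
    return [item for item in numbers + entities
            if item.lower() not in src_lower]

note = "Patient started on 5 mg amlodipine; creatinine 1.1 mg/dL."
summary = "Patient started on 50 mg Amlodipine; creatinine 1.1 mg/dL."
print(unsupported_items(note, summary))  # ['50'] -> dosage not supported by the note
```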
claim: 37 survey respondents reported encountering AI hallucinations, which are instances where the AI generates plausible but incorrect information.
claim: Only 6 of 59 survey participants rated the impact of AI hallucinations on their daily work as more than moderate, noting that AI has not yet been fully integrated into clinical workflows.
measurement: Survey respondents reported encountering AI hallucinations across various tasks: literature reviews (38 mentions), data analysis (25), research paper drafting (16), patient diagnostics (15), treatment recommendations (13), solving board exams (7), patient communication (5), EHR summaries (4), grant writing (4), insurance billing (2), and citations (1).
claim: Recommendations to safeguard against AI hallucinations in healthcare include manual cross-checking and verification, human supervision and expert review, confidence scoring or indicators, improving model architecture and training, training and education on AI limitations, and establishing ethical guidelines and standards.
claim: Medical annotation for evaluating AI hallucinations is often limited by the time doctors have to assess each AI-generated output, making it difficult to distinguish between clear AI hallucinations and potentially useful, unconventional diagnoses.
procedure: To address AI hallucinations, 85% (51) of survey respondents cross-reference with external sources, while others consult colleagues or experts (12), ignore erroneous outputs (11), cease use of the AI/LLM (11), inform the model of its mistake (1), update the prompt (1), rely on known correct answers (1), or examine underlying code (1).
claim: 50 out of 59 survey participants believe that the AI hallucinations they experienced or observed might impact patient health.
claim: Potential negative impacts of AI hallucinations on clinical care include omitting crucial patient information for diagnosis or treatment, offering irrelevant answers, providing outdated information, containing false or misleading information, leading to misdiagnosis, exaggerating clinical findings, failing to account for time-sensitive information, making hasty decisions, suggesting treatments that do not follow current guidelines, suggesting fatal treatments, presenting false chronological order, performing incorrect mathematical calculations, or citing unknown evidence.
measurement: Respondents identified insufficient training data (31 mentions) and biased training data (31 mentions) as the most frequently cited causes of AI hallucinations, followed by limitations in model architecture (30), lack of real-world context (26), overconfidence in AI-generated responses (24), and inadequate transparency of AI decision-making (14).
claim: Perceived causes of AI hallucinations include insufficient training data, biased training data, limitations in AI model architecture, lack of real-world context, overconfidence in AI-generated responses, and inadequate transparency of AI decision-making.
On Hallucinations in Artificial Intelligence–Generated Content ... (jnm.snmjournals.org, The Journal of Nuclear Medicine; 5 facts)
procedure: Recommended methods for detecting and evaluating AI hallucinations in Nuclear Medicine Imaging (NMI) include image-level comparisons, datasetwise statistical analysis, clinical task–based assessment by human or model observers, and the use of automated hallucination detectors trained on annotated benchmark datasets.
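A rough sketch of an image-level comparison, assuming a reference image is available on the same intensity scale as the AI-generated one; the z-score threshold and minimum region size are illustrative choices, not values from the article.

```python
import numpy as np

def suspicious_regions(reference, generated, z_thresh=4.0, min_pixels=9):
    """Toy image-level comparison for AI-generated imaging output.

    Flags pixels where the generated image is much brighter than the
    reference (possible fabricated 'lesions'). Both inputs are 2-D
    arrays on the same intensity scale; thresholds are illustrative.
    """
    diff = generated.astype(float) - reference.astype(float)
    z = (diff - diff.mean()) / (diff.std() + 1e-8)
    mask = z > z_thresh
    return mask, int(mask.sum()) >= min_pixels

# Synthetic example: the "generated" image adds a small bright blob.
ref = np.zeros((64, 64))
gen = ref.copy()
gen[30:34, 30:34] += 5.0
mask, flagged = suspicious_regions(ref, gen)
print(flagged, int(mask.sum()))  # True 16
```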
claim: Hallucinations in Artificial Intelligence–Generated Content (AIGC) for Nuclear Medicine Imaging (NMI) are typically subtle and deceptive, manifesting as added small abnormalities or realistic-looking lesions that do not exist in reality.
claim: In the context of Nuclear Medicine Imaging (NMI), the authors of 'On Hallucinations in Artificial Intelligence–Generated Content' define AI hallucinations as AI-fabricated abnormalities or artifacts that appear visually realistic and highly plausible yet are factually false and deviate from anatomic or functional truth.
claim: The authors of 'On Hallucinations in Artificial Intelligence–Generated Content' distinguish AI hallucinations from other AI-induced errors in Nuclear Medicine Imaging (NMI), such as the omission of real lesions or pure quantification bias, classifying these other errors as 'illusions' rather than hallucinations.
claim: Effective mitigation of AI hallucinations in Nuclear Medicine Imaging (NMI) requires a comprehensive approach that encompasses data quality, learning paradigms, and model design.
Context Graph vs Knowledge Graph: Key Differences for AI - Atlan (atlan.com, Atlan, Jan 27, 2026; 4 facts)
claim: AI hallucinations are defined as instances where large language models generate responses that are plausible but factually incorrect.
claim: Context graphs reduce AI hallucinations through token-efficient context engineering, which optimizes information delivery via relevance ranking, confidence-based filtering, and hierarchical summarization based on query complexity.
claim: Context graphs reduce AI hallucinations through reasoning chains with explainable paths, allowing each AI response to be traced back through the graph to the specific entities, relationships, and policies that informed the output.
claim: Context graphs reduce AI hallucinations through graph-grounded retrieval with operational filters, which allows systems to retrieve operational context such as data lineage, governing policies, quality signals, and ownership metadata alongside semantic relationships.
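A small sketch of what graph-grounded retrieval with an explainable path can look like; the graph, entity names, relationship labels, and policy names below are invented for illustration and are not Atlan's implementation.

```python
# Minimal sketch: trace an answer back through lineage, policy, and
# ownership edges to the node that supports it.
from collections import deque

edges = {
    "orders_table":   [("derived_from", "raw_events"), ("governed_by", "pii_policy")],
    "raw_events":     [("owned_by", "data_platform_team")],
    "revenue_report": [("built_from", "orders_table")],
}

def grounding_path(start, target):
    """Breadth-first search returning the chain of entities and
    relationships that links an answer back to its supporting node."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for relation, neighbor in edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, path + [f"--{relation}-->", neighbor]))
    return None  # no supporting path: the claim is not grounded in the graph

# "Which policy governs the revenue report's upstream data?"
print(" ".join(grounding_path("revenue_report", "pii_policy")))
# revenue_report --built_from--> orders_table --governed_by--> pii_policy
```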
EdinburghNLP/awesome-hallucination-detection - GitHub (github.com, GitHub; 2 facts)
claim: The ACL 2025 research categorizes AI hallucinations into four distinct types: non-hallucination, cross-lingual only, cross-modal only, and joint cross-lingual/cross-modal.
claim: The 'Monitoring Decoding' framework mitigates AI hallucinations by evaluating the factuality of partial responses during generation, rather than post-generation, to prevent the snowballing effect where incorrect tokens force subsequent fabrications.
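A schematic sketch in the spirit of this in-generation monitoring idea, not the authors' Monitoring Decoding implementation; the toy generator, trusted-fact set, and acceptance threshold are stand-ins so the control flow runs end to end.

```python
# Toy stand-ins for a real language model and a real factuality verifier.
TRUSTED_FACTS = {"aspirin is an antiplatelet", "insulin lowers blood glucose"}

def generate_chunk(step):
    # Toy generator: emits one short claim per step; the second one is wrong.
    claims = ["aspirin is an antiplatelet", "aspirin cures diabetes",
              "insulin lowers blood glucose"]
    return claims[step] if step < len(claims) else None

def check_factuality(claim):
    # Toy verifier: 1.0 if the claim is in a trusted set, else 0.0.
    return 1.0 if claim in TRUSTED_FACTS else 0.0

def monitored_generate(min_score=0.5):
    accepted = []
    step = 0
    while True:
        chunk = generate_chunk(step)
        step += 1
        if chunk is None:
            break
        # Check each chunk *during* generation, so a wrong claim is
        # dropped before later tokens are forced to build on it.
        if check_factuality(chunk) >= min_score:
            accepted.append(chunk)
    return "; ".join(accepted)

print(monitored_generate())
# aspirin is an antiplatelet; insulin lowers blood glucose
```

Rejecting a low-scoring chunk before it is appended is what keeps later output from being conditioned on the fabrication, which is the snowballing effect the fact above describes.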
Medical Hallucination in Foundation Models and Their Impact on ... (medrxiv.org, medRxiv, Nov 2, 2025; 2 facts)
measurement: To safeguard against AI hallucinations, survey respondents recommended manual cross-checking and verification (10 mentions), human supervision and expert review (8), confidence scoring or indicators (5), improving model architecture and training (5), training and education on AI limitations (4), and establishing ethical guidelines and standards (3).
claim: Generative AI systems pose unique safety risks because they can generate plausible but incorrect information, a phenomenon demonstrated in the analysis of state-of-the-art systems.
Overcoming the limitations of Knowledge Graphs for Decision ... (xpertrule.com, XpertRule; 1 fact)
claim: Knowledge graphs reduce AI hallucinations and improve natural language understanding by providing necessary context to AI models.
Reference Hallucination Score for Medical Artificial ... (medinform.jmir.org, JMIR Medical Informatics, Jul 31, 2024; 1 fact)
reference: Zhou J, Zhang J, Wan R, Cui X, Liu Q, Guo H, Shi X, Fu B, Meng J, Yue B, Zhang Y, and Zhang Z authored 'Integrating AI into clinical education: evaluating general practice trainees' proficiency in distinguishing AI-generated hallucinations and impacting factors', published in BMC Medical Education in 2025.
Enterprise AI Requires the Fusion of LLM and Knowledge Graph (stardog.com, Stardog, Dec 4, 2024; 1 fact)
measurement: A Morgan Stanley survey regarding enterprise adoption of AI found that 25% of respondents are worried about reputational damage, which is linked to concerns about AI hallucinations.
LLM Observability: How to Monitor AI When It Thinks in Tokens | TTMS (ttms.com, TTMS, Feb 10, 2026; 1 fact)
claim: AI hallucinations, where an AI assistant invents policy details or cites non-existent studies, can mislead users or produce incorrect business outputs.