Relations (1)
related 4.81 — strongly supporting 27 facts
Artificial intelligence is related to hallucination because hallucination is a well-documented phenomenon in which AI systems, such as Large Language Models, generate factually incorrect or fabricated information [1], [2]. Research highlights that these hallucinations are a significant challenge in AI development, affecting fields such as medical imaging and requiring specific mitigation strategies to ensure system reliability and user trust [3], [4], [5].
Facts (27)
Sources
On Hallucinations in Artificial Intelligence–Generated Content ... jnm.snmjournals.org 13 facts
claim: Thresholds for AI processing balance the extent of dose reduction with the risk of AI-induced hallucinations to ensure that improved visual quality does not come at the cost of inaccurate representations.
claim: Hallucinations in artificial intelligence–generated content for nuclear medicine imaging may arise from biased or nondeterministic data, the intrinsic probabilistic nature of deep learning, or limited visual feature understanding by models.
claim: Applying the no-gold-standard evaluation method to AI-generated content faces two challenges: the assumed linearity between true and measured values (sketched after this fact list) may not hold for nonlinear generative models, and the metric may capture general errors rather than hallucinations specifically.
claim: There is disagreement in the research community about whether hallucinations are unique to artificial intelligence: some studies define hallucinations as false structures in reconstructed images regardless of origin, while others argue they arise only from AI.
claim: The definition of hallucinations in artificial intelligence varies across publications, with no precise or universally accepted definition currently established.
image: Figure 5A in the source article illustrates that richer and more comprehensive training datasets effectively decrease hallucinated artifacts in AI models.
claim: AI models trained primarily on healthy subjects may hallucinate features when applied to rare diseases due to extrapolation from biased or incomplete representations.
claim: Mitigation strategies for AI hallucinations must be tailored to specific causes, including data quality, training paradigms, and model architecture.
claim: Improving the quality, quantity, and diversity of training data by incorporating a wider range of scanners, imaging protocols, and patient populations can reduce the risk of hallucinations in AI models.
claim: Most AI models used in Nuclear Medicine Imaging (NMI) prioritize visual image quality using loss functions like mean squared error, which may produce visually high-quality outputs that do not improve downstream data quality and may introduce subtle errors and hallucinations (a minimal loss sketch follows this list).
claim: Expert evaluation of AI-generated medical images often requires access to reference images, as even experienced readers may be misled by hallucinations without them.
procedure: To mitigate hallucinations caused by domain shift, developers should clearly define the intended scope and limitations of AI models to prevent inappropriate or unintended applications.
procedure: Radiomics-based evaluation detects AI hallucinations by selecting clinically relevant regions of interest, extracting quantitative features from both AI-generated content and reference images, and performing statistical comparisons to identify inconsistencies, as sketched below.
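The linearity assumed by the no-gold-standard method in the fact above is commonly expressed as a linear relation between measured and true values; the formulation below is a generic sketch with symbols chosen here (â, a, u, v), not the article's own notation.

```latex
% No-gold-standard evaluation: assumed linear relation (generic sketch)
% \hat{a}_{im} : value measured by method m for patient i
% a_i          : unknown true value
% u_m, v_m     : method-specific slope and offset
% \epsilon_{im}: zero-mean noise with method-specific variance \sigma_m^2
\hat{a}_{im} = u_m a_i + v_m + \epsilon_{im},
\qquad \epsilon_{im} \sim \mathcal{N}(0, \sigma_m^2)
```

A nonlinear generative model can break this relation, and a large residual under the fitted line reflects error of any kind, which is why the resulting metric may not isolate hallucinations specifically.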
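To illustrate the loss-function point above, here is a minimal training-step sketch assuming a hypothetical PyTorch denoising model: a pixel-wise mean-squared-error objective rewards outputs that are close on average, while a small fabricated or erased structure barely changes the loss value.

```python
import torch
import torch.nn as nn

# Hypothetical denoising model for low-dose images (architecture is a placeholder).
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

mse = nn.MSELoss()  # pixel-wise visual-fidelity objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One training step on synthetic tensors standing in for (low-dose, full-dose) pairs.
low_dose = torch.rand(4, 1, 64, 64)
full_dose = torch.rand(4, 1, 64, 64)

optimizer.zero_grad()
prediction = model(low_dose)
loss = mse(prediction, full_dose)  # penalizes average pixel error only:
# a hallucinated or removed small lesion changes this number very little,
# so a low MSE does not guarantee downstream (diagnostic) data quality.
loss.backward()
optimizer.step()
```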
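The radiomics-based evaluation procedure above can be sketched roughly as follows; the first-order features and the paired Wilcoxon test are illustrative stand-ins chosen here (NumPy/SciPy), not the article's specific radiomics pipeline.

```python
import numpy as np
from scipy import stats

def roi_features(image: np.ndarray, mask: np.ndarray) -> dict:
    """Extract simple first-order features from a clinically relevant ROI."""
    roi = image[mask]
    return {
        "mean": float(roi.mean()),
        "std": float(roi.std()),
        "p90": float(np.percentile(roi, 90)),
        "entropy": float(stats.entropy(np.histogram(roi, bins=32)[0] + 1e-9)),
    }

def compare_cohort(ai_images, reference_images, masks, alpha=0.05):
    """Paired comparison of ROI features between AI-generated and reference images.

    A feature whose paired test rejects equality flags a systematic
    inconsistency, i.e. a candidate hallucination signature.
    """
    names = ["mean", "std", "p90", "entropy"]
    ai_vals = {n: [] for n in names}
    ref_vals = {n: [] for n in names}
    for ai_img, ref_img, mask in zip(ai_images, reference_images, masks):
        fa, fr = roi_features(ai_img, mask), roi_features(ref_img, mask)
        for n in names:
            ai_vals[n].append(fa[n])
            ref_vals[n].append(fr[n])
    flags = {}
    for n in names:
        _, p = stats.wilcoxon(ai_vals[n], ref_vals[n])  # paired, non-parametric
        flags[n] = p < alpha
    return flags

# Toy usage with random data standing in for a small cohort.
rng = np.random.default_rng(0)
masks = [np.zeros((64, 64), dtype=bool) for _ in range(10)]
for m in masks:
    m[20:40, 20:40] = True
refs = [rng.normal(1.0, 0.1, (64, 64)) for _ in range(10)]
ais = [r + rng.normal(0.05, 0.02, (64, 64)) for r in refs]  # simulated AI bias
print(compare_cohort(ais, refs, masks))
```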
Medical Hallucination in Foundation Models and Their Impact on ... medrxiv.org 3 facts
measurement: Respondents reported using the following strategies to address AI hallucinations: consulting colleagues or experts (12), ignoring erroneous outputs (11), ceasing use of the AI/LLM (11), directly informing the model of its mistake (1), updating the prompt (1), relying on known correct answers (1), and examining underlying code (1).
measurement: The most common strategy for addressing AI hallucinations among respondents was cross-referencing with external sources, employed by 85% (51) of respondents.
measurement: In a survey of 59 participants, the most frequently cited factors contributing to AI hallucinations were insufficient training data (31 mentions), biased training data (31), limitations in model architecture (30), lack of real-world context (26), overconfidence in AI-generated responses (24), and inadequate transparency of AI decision-making (14).
Medical Hallucination in Foundation Models and Their ... medrxiv.org 3 facts
claim: Treating AI systems as products, which would establish potential liability for systematic hallucinations or errors, is a proposed legal framework that faces challenges due to the ability of AI systems to evolve through continuous learning.
claim: Hallucinations in AI systems curtail the impact of precision medicine by reducing the trustworthiness of personalized treatment recommendations.
claim: The term 'hallucination' in AI lacks a universally accepted definition and encompasses diverse errors, which creates a fundamental challenge for standardizing benchmarks or evaluating detection methods (Huang et al., 2024).
LLM Hallucination Detection and Mitigation: State of the Art in 2026 zylos.ai 1 fact
perspective: Mitigation of hallucinations, rather than complete elimination, remains the realistic goal for AI systems.
LLM Hallucinations: Causes, Consequences, Prevention - LLMs llmmodels.org 1 fact
claim: Large Language Models (LLMs) are AI systems capable of generating human-like text, but they are susceptible to producing outputs that lack factual accuracy or coherence, a phenomenon known as hallucinations.
MedHallu: Benchmark for Medical LLM Hallucination Detection emergentmind.com 1 fact
claim: The MedHallu benchmark serves as a guidepost for developers and researchers aiming to minimize hallucinations and increase the safety of AI systems deployed in critical sectors like healthcare.
The Role of Hallucinations in Large Language Models - CloudThat cloudthat.com 1 fact
claim: In the context of artificial intelligence, hallucination refers to a large language model generating information that appears confident and fluent but is factually incorrect, fabricated, or unverifiable.
Designing Knowledge Graphs for AI Reasoning, Not Guesswork linkedin.com 1 fact
claim: AI systems often produce hallucinations because they are forced to infer connections from raw data, loosely related documents, or embeddings at runtime, rather than having that structure provided.
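As a minimal illustration of the contrast drawn in the fact above, the toy sketch below answers queries from explicitly provided relations and abstains otherwise; the graph, entity names, and relation names are hypothetical.

```python
# Explicit relation structure provided to the system (hypothetical toy graph).
knowledge_graph = {
    ("drug_a", "treats"): {"condition_x"},
    ("drug_a", "contraindicated_with"): {"drug_b"},
    ("condition_x", "subtype_of"): {"condition_family_y"},
}

def lookup(subject: str, relation: str) -> set:
    """Answer only from asserted edges; return an empty set rather than guessing."""
    return knowledge_graph.get((subject, relation), set())

# Grounded answer: the edge exists, so no runtime inference is needed.
print(lookup("drug_a", "treats"))            # {'condition_x'}
# Unasserted relation: the structured system can abstain, whereas a model
# forced to infer connections from embeddings at runtime may fabricate a link.
print(lookup("drug_a", "interacts_with"))    # set()
```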
A framework to assess clinical safety and hallucination rates of LLMs ... nature.com 1 fact
reference: The paper 'Truthful AI: Developing and governing AI that does not lie' (arXiv:2110.06674, 2021) explores the development and governance of AI systems to prevent dishonesty or hallucination.
Building Trustworthy NeuroSymbolic AI Systems - arXiv arxiv.org 1 fact
reference: Zhang et al. (2023) authored the paper 'Siren’s Song in the AI Ocean: A Survey on Hallucination in Large Language Models', published as arXiv:2309.01219.
LLM Observability: How to Monitor AI When It Thinks in Tokens | TTMS ttms.com 1 fact
claim: Frequent or egregious hallucinations and inaccuracies in AI systems can erode user trust and damage brand credibility.