Relations (1)

related 3.17 — strongly supporting 8 facts

Hallucinations are defined as a phenomenon in AI-generated content in which models produce factually incorrect or inconsistent outputs, as described in [1], [2], and [3]. The relationship is further supported by studies of these errors across domains including nuclear medicine imaging and natural language processing, as detailed in [4], [5], and [6].

Facts (8)

Sources
On Hallucinations in Artificial Intelligence–Generated Content ..., The Journal of Nuclear Medicine (jnm.snmjournals.org), 8 facts
Claim: Hallucinations in artificial intelligence–generated content (AIGC) for nuclear medicine imaging (NMI) are defined as the generation of realistic yet factually incorrect content that can misrepresent anatomic and functional information.
Reference: Farquhar et al. define confabulations as a subset of hallucinations where artificial intelligence–generated content is both incorrect and arbitrary, meaning the model outputs fluctuate unpredictably under identical inputs due to irrelevant factors like random seed variations.
Claim: The DREAM report provides a comprehensive perspective on hallucinations in artificial intelligence–generated content (AIGC) for nuclear medicine imaging (NMI).
Claim: In natural language processing, hallucinations are typically defined as artificial intelligence–generated content that is inconsistent with given targets.
Claim: The medical imaging community currently lacks a domain-specific and systematic analysis of hallucinations in artificial intelligence–generated content (AIGC), unlike the natural language processing community, which has recently explored this topic.
Claim: Hallucinations in artificial intelligence–generated content (AIGC) used in nuclear medicine imaging (NMI) can lead to cascading clinical errors, including misdiagnosis, mistreatment, unnecessary interventions, medication errors, and ethical or legal concerns.
Claim: Artificial intelligence–generated content (AIGC) in medical imaging can appear visually accurate but may contain hallucinations when compared against reference CT attenuation correction (AC) images.
Claim: Hallucinations in artificial intelligence–generated content arise when the learned mapping function deviates from the true underlying mapping G.
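A minimal formal sketch of the final claim, assuming the source treats AIGC generation as a learned approximation \hat{G} of the true underlying mapping G; the discrepancy measure d and tolerance \epsilon below are illustrative assumptions, not notation taken from the source:

\[
\hat{G} \approx G, \qquad \text{hallucination at input } x \;\Longleftrightarrow\; d\!\left(\hat{G}(x),\, G(x)\right) > \epsilon
\]

Here d(\cdot,\cdot) would be a task-appropriate discrepancy (for example, a voxelwise error for synthesized attenuation-correction images) and \epsilon a tolerance separating acceptable approximation error from a clinically meaningful deviation.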