Relations (1)
related (score 3.00): strongly supported by 7 facts
Hallucinations are a critical challenge in generative artificial intelligence, arising from factors such as domain shift [1] and deviations from learned statistical priors [2]. Mitigation strategies include neurosymbolic AI [3] and Retrieval-Augmented Generation [4], while regulatory frameworks focus on measuring and managing hallucinations to ensure safe deployment [5], [6].
Facts (7)
Sources
On Hallucinations in Artificial Intelligence–Generated Content ... (jnm.snmjournals.org, 3 facts)
Claim: Overrepresentation of specific patterns in training data, such as lesions frequently occurring in the liver, can cause generative AI models to hallucinate those features in test samples where they do not exist.
Claim: Generative AI models rely on learned statistical priors, so any deviation between the training and testing distributions can produce unpredictable outputs and increase the risk of hallucinations.
Claim: Domain shift, a mismatch between the data distribution used for training and the data distribution used for testing, is a key contributor to hallucinations in generative AI models.
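The domain-shift claims above lend themselves to a quantitative reading: if hallucination risk rises when the test distribution deviates from the training prior, that deviation can be estimated directly. Below is a minimal sketch, assuming scalar input features and using a histogram KL divergence as the shift measure; the function names, bin count, and synthetic data are illustrative assumptions, not from the cited paper.

```python
import numpy as np

def kl_divergence(p: np.ndarray, q: np.ndarray, eps: float = 1e-10) -> float:
    """KL(P || Q) between two discrete distributions given as histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def shift_score(train_feats: np.ndarray, test_feats: np.ndarray, bins: int = 32) -> float:
    """Histogram a scalar feature on both splits and return their KL divergence.
    A large value suggests the test distribution deviates from the training prior."""
    lo = min(train_feats.min(), test_feats.min())
    hi = max(train_feats.max(), test_feats.max())
    p, _ = np.histogram(train_feats, bins=bins, range=(lo, hi))
    q, _ = np.histogram(test_feats, bins=bins, range=(lo, hi))
    return kl_divergence(p.astype(float), q.astype(float))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = rng.normal(0.0, 1.0, 10_000)    # training distribution
    in_dist = rng.normal(0.0, 1.0, 2_000)   # matched test set
    shifted = rng.normal(1.5, 1.0, 2_000)   # shifted test set: elevated risk
    print(f"in-distribution shift score: {shift_score(train, in_dist):.3f}")
    print(f"shifted-data shift score:    {shift_score(train, shifted):.3f}")
```

In practice the same idea is usually applied to learned embeddings rather than raw scalar features, and the alert threshold on the score would be calibrated on held-out data.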
Practical GraphRAG: Making LLMs smarter with Knowledge Graphs (youtube.com, 1 fact)
Claim: Retrieval-Augmented Generation (RAG) has become a standard architecture component for Generative AI (GenAI) applications to address hallucinations and integrate factual knowledge.
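As a concrete illustration of the RAG pattern this claim describes, here is a minimal sketch: a toy token-overlap retriever stands in for a real vector or graph retriever, and the assembled prompt instructs the model to answer only from retrieved context. The names (DOCS, retrieve, build_prompt) and the ranking heuristic are assumptions for illustration, not drawn from the video.

```python
from collections import Counter

DOCS = [
    "Hallucinations occur when a model generates content unsupported by its inputs.",
    "Retrieval-Augmented Generation grounds model outputs in retrieved documents.",
    "Knowledge graphs store entities and relations for structured retrieval.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive token overlap with the query (a stand-in for a
    real vector or graph retriever)."""
    q_tokens = Counter(query.lower().split())
    scored = [(sum((Counter(d.lower().split()) & q_tokens).values()), d) for d in docs]
    return [d for score, d in sorted(scored, reverse=True)[:k] if score > 0]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a grounded prompt: instructing the model to answer only from
    the retrieved context is the core hallucination mitigation in RAG."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return (
        "Answer using ONLY the context below. If the context is insufficient, "
        f"say so.\n\nContext:\n{context}\n\nQuestion: {query}\nAnswer:"
    )

print(build_prompt("How does retrieval reduce hallucinations?", DOCS))
```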
How Neurosymbolic AI Finds Growth That Others Cannot See (hbr.org, 1 fact)
Claim: Neurosymbolic AI helps prevent hallucinations in generative AI systems by applying logical, rule-based constraints to the outputs generated by neural networks.
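A minimal sketch of the rule-based constraint idea in this claim: a symbolic validation layer rejects neural outputs that violate hard domain rules, regardless of the model's confidence. The Candidate record and the rules themselves are invented for illustration and are not from the HBR article.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Candidate:
    """A structured record extracted from a neural model's free-text output."""
    drug: str
    dose_mg: float
    patient_age: int

# Symbolic layer: hard rules a valid output must satisfy, independent of the
# neural network's confidence. (Illustrative rules, not a real formulary.)
Rule = Callable[[Candidate], bool]
RULES: list[tuple[str, Rule]] = [
    ("dose must be positive", lambda c: c.dose_mg > 0),
    ("dose must not exceed 1000 mg", lambda c: c.dose_mg <= 1000),
    ("age must be physiologically plausible", lambda c: 0 <= c.patient_age <= 120),
]

def validate(candidate: Candidate) -> list[str]:
    """Return the names of every rule the candidate violates; an empty list
    means the symbolic layer accepts the neural output."""
    return [name for name, rule in RULES if not rule(candidate)]

ok = Candidate(drug="metformin", dose_mg=500, patient_age=54)
bad = Candidate(drug="metformin", dose_mg=50_000, patient_age=54)  # hallucinated dose
print(validate(ok))   # []
print(validate(bad))  # ['dose must not exceed 1000 mg']
```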
Automating hallucination detection with chain-of-thought reasoning (amazon.science, 1 fact)
Claim: Identifying and measuring hallucinations is essential for the safe use of generative AI.
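One common way to automate the detection this claim calls for, in line with the source's chain-of-thought framing, is an LLM judge prompted to reason step by step before issuing a verdict. A minimal sketch of such a prompt builder and verdict parser follows; the prompt wording and the VERDICT convention are assumptions, and the actual model call is out of scope.

```python
def build_judge_prompt(source: str, response: str) -> str:
    """Assemble a chain-of-thought judging prompt: the verifier model is asked
    to check the response claim by claim against the source before deciding."""
    return (
        "You are a fact-checking judge.\n"
        f"Source:\n{source}\n\n"
        f"Response to check:\n{response}\n\n"
        "Think step by step: (1) list each factual claim in the response; "
        "(2) for each claim, quote the supporting source text or state that "
        "none exists; (3) finish with a single line 'VERDICT: FAITHFUL' or "
        "'VERDICT: HALLUCINATED'."
    )

def parse_verdict(judge_output: str) -> bool:
    """True if the judge's final verdict flags a hallucination."""
    return "VERDICT: HALLUCINATED" in judge_output.upper()
```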
Medical Hallucination in Foundation Models and Their ... (medrxiv.org, 1 fact)
Claim: Effective regulatory frameworks for generative AI require a data-driven approach that quantifies and categorizes different types of hallucinations, establishes clear risk thresholds for clinical applications, and creates protocols for monitoring and reporting AI-related adverse events.
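A minimal sketch of the quantify-categorize-threshold loop this claim describes: observed per-category hallucination rates are audited against fixed risk thresholds, and out-of-bounds categories are surfaced as reportable events. The category names and threshold values are placeholders, not regulatory figures.

```python
from collections import Counter

# Illustrative hallucination categories and per-category risk thresholds
# (fraction of sampled outputs); real clinical limits would be set by regulators.
THRESHOLDS = {
    "fabricated_citation": 0.01,
    "incorrect_dosage": 0.001,
    "nonexistent_condition": 0.005,
}

def audit(labels: list[str], n_outputs: int) -> list[str]:
    """Compare observed per-category hallucination rates against thresholds
    and return reportable adverse-event alerts."""
    counts = Counter(labels)
    alerts = []
    for category, limit in THRESHOLDS.items():
        rate = counts.get(category, 0) / n_outputs
        if rate > limit:
            alerts.append(f"{category}: rate {rate:.4f} exceeds threshold {limit}")
    return alerts

# e.g. 3 dosage errors flagged across 1,000 sampled model outputs
print(audit(["incorrect_dosage"] * 3, n_outputs=1_000))
```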