Relations (1)

related 2.00 — strongly supporting 3 facts

AI hallucinations are instances in which an artificial intelligence system generates plausible but incorrect content, and survey respondents reported encountering them in AI-generated output [1]. Procedures and safeguards have been developed specifically to mitigate these errors when using artificial intelligence systems [2], [3].

Facts (3)

Sources
Medical Hallucination in Foundation Models and Their ... medrxiv.org medRxiv 2 facts
claim: 37 survey respondents reported encountering AI hallucinations, which are instances where the AI generates plausible but incorrect information.
procedure: To address AI hallucinations, 85% (51) of survey respondents cross-reference with external sources, while others consult colleagues or experts (12), ignore erroneous outputs (11), cease use of the AI/LLM (11), inform the model of its mistake (1), update the prompt (1), rely on known correct answers (1), or examine underlying code (1). A minimal sketch of the cross-referencing workflow appears after the source list.
Medical Hallucination in Foundation Models and Their Impact on ... medrxiv.org medRxiv 1 fact
measurement: To safeguard against AI hallucinations, survey respondents recommended manual cross-checking and verification (10 mentions), human supervision and expert review (8), confidence scoring or indicators (5), improving model architecture and training (5), training and education on AI limitations (4), and establishing ethical guidelines and standards (3). A sketch of a simple confidence indicator also follows the source list.
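
The most common mitigation reported in [2] is cross-referencing model output against an external source. Below is a minimal sketch of that workflow, not the paper's method: query_model and lookup_reference are hypothetical stand-ins (stubbed with canned text here), and the word-overlap check is a deliberately crude agreement flag; real verification would rely on a human reviewer or a stronger matching step.

```python
# Sketch of the cross-referencing workflow reported by survey respondents:
# check the model's answer against a trusted external reference before use.
# query_model and lookup_reference are hypothetical placeholders, not a real API.

def query_model(question: str) -> str:
    """Placeholder for an LLM call; returns a canned answer here."""
    return "Metformin is first-line therapy for type 2 diabetes."

def lookup_reference(question: str) -> str:
    """Placeholder for a search of a trusted external source (guideline, textbook)."""
    return "Guidelines recommend metformin as first-line therapy for type 2 diabetes."

def cross_check(question: str) -> dict:
    """Return the model answer, the reference text, and a crude agreement flag."""
    answer = query_model(question)
    reference = lookup_reference(question)
    answer_terms = set(answer.lower().split())
    overlap = answer_terms & set(reference.lower().split())
    # Naive lexical overlap; low agreement should trigger human review, not auto-rejection.
    agrees = len(overlap) / max(len(answer_terms), 1) > 0.5
    return {"answer": answer, "reference": reference, "agrees": agrees}

if __name__ == "__main__":
    print(cross_check("What is first-line therapy for type 2 diabetes?"))
```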
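The safeguard list in [3] mentions "confidence scoring or indicators" without specifying an implementation. One possible way to surface such an indicator, assumed here rather than taken from the source, is a self-consistency style check: sample the model several times and report how often the answers agree. sample_model is a hypothetical stub that draws from canned outputs.

```python
# One possible "confidence indicator": sample the model repeatedly and report
# the agreement rate of the most frequent answer (self-consistency style check).
# sample_model is a hypothetical placeholder for a stochastic LLM call.

import random
from collections import Counter

def sample_model(question: str) -> str:
    """Placeholder for a stochastic LLM call; returns canned answers here."""
    return random.choice(["Answer A", "Answer A", "Answer A", "Answer B"])

def confidence_indicator(question: str, n_samples: int = 10) -> tuple[str, float]:
    """Return the most frequent answer and the fraction of samples agreeing with it."""
    counts = Counter(sample_model(question) for _ in range(n_samples))
    answer, hits = counts.most_common(1)[0]
    return answer, hits / n_samples

if __name__ == "__main__":
    ans, conf = confidence_indicator("Example question")
    # A low agreement fraction would flag the answer for expert review.
    print(f"{ans} (agreement: {conf:.0%})")
```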