Measurement
Respondents reported the following strategies for addressing AI hallucinations: consulting colleagues or experts (12), ignoring erroneous outputs (11), ceasing use of the AI/LLM (11), directly informing the model of its mistake (1), updating the prompt (1), relying on known correct answers (1), and examining the underlying code (1).
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... (www.medrxiv.org)
Referenced by nodes (2)
- hallucination concept
- artificial intelligence concept