Measurement
Respondents reported the following strategies for addressing AI hallucinations (respondent counts in parentheses): consulting colleagues or experts (12), ignoring erroneous outputs (11), ceasing use of the AI/LLM (11), directly informing the model of its mistake (1), updating the prompt (1), relying on known correct answers (1), and examining the underlying code (1).
