Measurement
Of 61 survey respondents, 21 believed AI/LLM outputs were often correct, 18 stated they were sometimes correct, and 6 felt they were rarely correct.
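The respondent counts above can be turned into shares of the 61-person sample with a few lines of arithmetic. This is a minimal sketch; note that the three listed categories sum to 45, so the remaining 16 respondents are not broken down in this excerpt (treated here as unlisted responses, an assumption):

```python
# Shares of the 61 survey respondents by stated view of AI/LLM correctness.
total = 61
counts = {"often correct": 21, "sometimes correct": 18, "rarely correct": 6}

for label, n in counts.items():
    print(f"{label}: {n}/{total} = {n / total:.1%}")

# The listed categories cover 45 of 61; the rest are not itemized here.
remainder = total - sum(counts.values())
print(f"unlisted responses: {remainder}/{total} = {remainder / total:.1%}")
```

Running this gives roughly 34.4% "often", 29.5% "sometimes", and 9.8% "rarely", leaving about 26.2% of the sample unaccounted for in this excerpt.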
Sources
- Medical Hallucination in Foundation Models and Their Impact on ... www.medrxiv.org
Referenced by nodes (2)
- artificial intelligence concept
- AI/LLM tools concept