Sources
Medical Hallucination in Foundation Models and Their Impact on ... (medrxiv.org)
Claim: Willems et al. (2023) report that repeated hallucinations in AI systems breed skepticism among healthcare providers and patients, which inhibits the broader integration of these tools into clinical practice.
Claim: Determining liability for AI-generated incorrect or misleading information is complex, potentially involving the AI developers, the healthcare providers using the system, and the healthcare institutions implementing the technology (Bottomley and Thaldar, 2023).