claim
The integration of large language models into healthcare introduces risks to patient care: hallucinated outputs may influence therapeutic choices, diagnostic pathways, and patient-provider communication, as noted by Topol (2019), Mehta and Devarakonda (2018), and Hata et al. (2022).
