Relations (1)
related (score 2.32): strongly supporting 4 facts
Large Language Models are the primary subject of the clinical safety evaluation framework proposed in [1], which was developed to assess their performance in medical contexts as described in [2] and [3]. This framework is the focus of the research article published in npj Digital Medicine on the clinical safety and hallucination rates of these models [4].
Facts (4)
Sources
A framework to assess clinical safety and hallucination rates of LLMs for medical text summarisation (nature.com): 4 facts
claim: D.P., E.A., M.D., N.M., S.K., and J.B. contributed to the concept, design, and execution of the study regarding clinical safety and hallucination rates of LLMs.
reference: The article 'A framework to assess clinical safety and hallucination rates of LLMs for medical text summarisation' was published in npj Digital Medicine (volume 8, article 274) in 2025, authored by E. Asgari, N. Montaña-Brown, M. Dubois, and others.
claim: The CREOLA platform was built by M.D. and S.K. to facilitate clinical safety and hallucination rate assessments of LLMs.
claim: The authors propose a framework for assessing clinical safety and hallucination rates in large language models (LLMs) that includes an error taxonomy for classifying outputs, an experimental structure for iterative comparisons in document generation pipelines, a clinical safety framework to evaluate error harms, and a graphical user interface named CREOLA.
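The claim above names the framework's components without illustrating them. Below is a minimal sketch of how an error taxonomy with harm grading might be represented in code, purely for illustration: the names ErrorType, HarmLevel, AnnotatedError, and hallucination_rate are hypothetical and are not taken from the paper's actual schema or from CREOLA.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical error categories; the paper's actual taxonomy may differ.
class ErrorType(Enum):
    HALLUCINATION = "hallucination"  # content unsupported by the source document
    OMISSION = "omission"            # clinically relevant content missing from the output

# Hypothetical harm grades for the clinical safety assessment.
class HarmLevel(Enum):
    NO_HARM = 0
    MINOR = 1
    MODERATE = 2
    SEVERE = 3

@dataclass
class AnnotatedError:
    """One annotated error found in an LLM-generated medical summary."""
    summary_id: str
    text_span: str         # the offending (or missing) text
    error_type: ErrorType
    harm: HarmLevel

def hallucination_rate(errors: list[AnnotatedError], n_summaries: int) -> float:
    """Fraction of summaries containing at least one hallucination."""
    flagged = {e.summary_id for e in errors if e.error_type is ErrorType.HALLUCINATION}
    return len(flagged) / n_summaries

# Example: 1 of 2 summaries contains a hallucinated finding.
errors = [
    AnnotatedError("s1", "patient denies chest pain", ErrorType.HALLUCINATION, HarmLevel.MODERATE),
    AnnotatedError("s2", "penicillin allergy", ErrorType.OMISSION, HarmLevel.SEVERE),
]
print(hallucination_rate(errors, n_summaries=2))  # 0.5
```

Representing each annotation as a structured record like this is one way the paper's iterative experiments could compare error counts and harm distributions across prompt or pipeline variants.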