Claim
The authors propose a framework for assessing the clinical safety and hallucination rates of large language models (LLMs). It comprises an error taxonomy for classifying model outputs, an experimental structure for iterative comparisons across document-generation pipelines, a clinical safety framework for grading the harm of errors, and a graphical user interface named CREOLA.
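To make the claim's moving parts concrete, below is a minimal Python sketch of how an error taxonomy, a harm grading, and a per-sentence hallucination rate could fit together. All names (`ErrorType`, `Harm`, `Annotation`, `hallucination_rate`) and the category and harm labels are illustrative assumptions, not the paper's actual taxonomy or CREOLA's interface.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical error taxonomy; labels are illustrative assumptions,
# not the paper's exact categories.
class ErrorType(Enum):
    HALLUCINATION = "hallucination"  # content not supported by the source document
    OMISSION = "omission"            # clinically relevant content left out

# Hypothetical harm grades for the clinical safety assessment.
class Harm(Enum):
    NONE = 0
    MINOR = 1
    MODERATE = 2
    SEVERE = 3

@dataclass
class Annotation:
    """One annotated sentence from a generated clinical document."""
    sentence: str
    error: ErrorType
    harm: Harm

def hallucination_rate(annotations: list[Annotation], total_sentences: int) -> float:
    """Fraction of generated sentences flagged as hallucinations."""
    n_halluc = sum(1 for a in annotations if a.error is ErrorType.HALLUCINATION)
    return n_halluc / total_sentences if total_sentences else 0.0

if __name__ == "__main__":
    # Toy example: 2 annotated errors out of 20 generated sentences.
    notes = [
        Annotation("Patient denies chest pain.", ErrorType.HALLUCINATION, Harm.MODERATE),
        Annotation("No mention of renal history.", ErrorType.OMISSION, Harm.MINOR),
    ]
    print(f"hallucination rate: {hallucination_rate(notes, total_sentences=20):.2%}")
```

Under this sketch, iterative pipeline comparison would amount to recomputing the rate and harm distribution after each prompt or pipeline change.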
Authors
Sources
- A framework to assess clinical safety and hallucination rates of LLMs ... www.nature.com via serper
Referenced by nodes (4)
- Large Language Models concept
- hallucination rate concept
- CREOLA concept
- clinical safety evaluation framework concept