claim
Large Language Models can produce non-factual or unfaithful text with high confidence, which poses significant risks in high-stakes domains such as healthcare.
Authors
Sources
- A framework to assess clinical safety and hallucination rates of LLMs ... www.nature.com via serper
Referenced by nodes (2)
- Large Language Models concept
- health care concept