claim
Large Language Models (LLMs) exhibit systematic errors known as medical hallucinations: they generate incorrect or misleading medical information that can adversely affect clinical decision-making and patient outcomes.
