claim
The MedHallu benchmark defines hallucination in large language models as instances where a model produces information that is plausible-sounding but factually incorrect.
