claim
The Hallucinations Leaderboard evaluates Large Language Models (LLMs) on how well they handle various types of hallucinations, giving researchers and developers insight into model reliability and efficiency.

Authors

Sources

Referenced by nodes (2)