claim
The TruthfulQA benchmark measures whether large language models generate truthful answers, using questions crafted so that some humans would answer them falsely because of common misconceptions or false beliefs; it probes imitative falsehoods learned from training data rather than hallucination in general.

Authors

Sources

Referenced by nodes (1)