claim
The TruthfulQA benchmark evaluates large language models' susceptibility to factual hallucinations by presenting questions designed to provoke common misconceptions.
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (1)
- TruthfulQA concept