reference
TruthfulQA (Lin et al., 2022) is a benchmark that measures whether large language models give truthful answers to questions designed to elicit imitative falsehoods, i.e., answers that mimic common human misconceptions and false beliefs.
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (2)
- Large Language Models concept
- TruthfulQA concept