reference
TruthfulQA (Lin et al., 2022) is a benchmark that evaluates whether large language models give truthful answers to questions crafted to elicit common human misconceptions, i.e., whether models reproduce imitative falsehoods learned from human text.
