reference
RealToxicityPrompts (Gehman et al., 2020) is a benchmark of naturally occurring sentence prefixes used to measure how language models degenerate into toxic or otherwise inappropriate text during open-ended generation.
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept