claim
Web-scale, unfiltered pretraining data that contains inconsistencies, biases, and outdated or false information can negatively affect large language models during training, as noted by Shuster et al. (2022), Chen et al. (2023), and Weidinger et al. (2022).
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (1)
- Large Language Models concept