Claim
Zhang et al. (2023) found that grounded pretraining improves the alignment of AI system outputs with real-world facts.
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (1)
- artificial intelligence concept