Reference
Zhang et al. (2023) found that grounded language model training reduces the occurrence of hallucinations.
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)