Claim
Grounded pretraining reduces hallucination during generation in large language models, though it requires significant data and compute resources.
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org, via serper)
Referenced by nodes (2)
- Large Language Models concept
- hallucination concept