Claim
Factual hallucinations in large language models are generated statements of fact that are inaccurate or fabricated, conflicting with real-world knowledge or external knowledge bases — for example, identifying Toronto, rather than Ottawa, as the capital of Canada.
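The claim can be illustrated with a minimal sketch of knowledge-base verification: a generated answer is flagged as a factual hallucination when it contradicts a reference fact. All names and data here (`KNOWLEDGE_BASE`, `is_factual_hallucination`) are hypothetical illustrations, not a method from the cited survey.

```python
# Minimal hypothetical sketch: flag a factual hallucination by comparing a
# model's answer against a small external knowledge base (a stand-in dict).
KNOWLEDGE_BASE = {"capital of Canada": "Ottawa"}

def is_factual_hallucination(question: str, model_answer: str) -> bool:
    """Return True when the answer contradicts the reference knowledge base."""
    reference = KNOWLEDGE_BASE.get(question)
    if reference is None:
        return False  # no reference fact available, so nothing to contradict
    return model_answer.strip().lower() != reference.lower()

# The claim's example: answering "Toronto" contradicts the stored fact.
print(is_factual_hallucination("capital of Canada", "Toronto"))  # True
print(is_factual_hallucination("capital of Canada", "Ottawa"))   # False
```

Real fact-checking pipelines compare against large external knowledge bases or retrieved documents rather than a hard-coded dictionary; the dict here only makes the mismatch concrete.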
Authors
Sources
- Survey and analysis of hallucinations in large language models (www.frontiersin.org)
Referenced by nodes (1)
- Large Language Models concept