claim
Logical hallucinations in large language models are internally inconsistent reasoning paths, such as concluding 'if a = b and b = c, then a ≠ c', even though the output remains grammatically correct.
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (1)
- Large Language Models concept
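The transitivity failure in the claim ('a = b and b = c, yet a ≠ c') can be checked mechanically. The sketch below, a minimal illustration not drawn from the cited survey, uses a union-find structure over asserted equalities and then verifies that no asserted inequality connects two terms already proven equal; the `consistent` helper and its names are hypothetical.

```python
# Hypothetical sketch: detect the kind of logical inconsistency the claim
# describes, using union-find over equality assertions.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        # Register unseen terms, then follow parent links with path halving.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, x, y):
        self.parent[self.find(x)] = self.find(y)

def consistent(equalities, inequalities):
    """True iff the asserted (in)equalities admit a model."""
    uf = UnionFind()
    for x, y in equalities:
        uf.union(x, y)
    # An inequality between two terms in the same equality class is a contradiction.
    return all(uf.find(x) != uf.find(y) for x, y in inequalities)

# The hallucinated conclusion 'if a = b and b = c, then a != c' is inconsistent:
print(consistent([("a", "b"), ("b", "c")], [("a", "c")]))  # False
# Dropping the inequality restores consistency:
print(consistent([("a", "b"), ("b", "c")], []))  # True
```

This only covers equality reasoning; detecting richer logical hallucinations would need a full theorem prover or SMT solver, but the same idea applies: translate the model's stated steps into formal constraints and check satisfiability.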