Claim
Conceptual hallucinations in Large Language Models (LLMs) can produce false positives, an undesirable property of non-transparent models that can compromise disambiguation tasks (Peng et al., 2022).
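As a minimal, hypothetical sketch of the failure mode this claim describes, the snippet below shows how a hallucinated answer from an LLM-based entity-disambiguation step can be accepted as a false positive unless it is validated against a knowledge graph (the grounding strategy the cited Frontiers source discusses). The `llm_link_entity` stub, the toy `KNOWLEDGE_GRAPH` dict, and all entity IDs are illustrative assumptions, not taken from the sources.

```python
# Toy "knowledge graph": surface form -> set of valid entity IDs.
# (Illustrative assumption; a real system would query an enterprise KG.)
KNOWLEDGE_GRAPH = {
    "Jaguar": {"Q26742", "Q30055"},  # e.g., the animal, the car maker
}

def llm_link_entity(mention: str, context: str) -> str:
    """Stand-in for a non-transparent LLM disambiguation call.

    Such a model may confidently return an entity ID that is not
    grounded in any source -- a conceptual hallucination.
    """
    return "Q99999"  # hallucinated ID: confidently wrong

def disambiguate(mention: str, context: str) -> str | None:
    """Link a mention to an entity, rejecting ungrounded candidates."""
    candidate = llm_link_entity(mention, context)
    valid = KNOWLEDGE_GRAPH.get(mention, set())
    if candidate in valid:
        return candidate
    # Without this grounding check, the hallucinated ID would be
    # accepted as a correct link: a false positive.
    return None

print(disambiguate("Jaguar", "The jaguar stalked through the rainforest."))
# -> None: the ungrounded candidate is rejected instead of accepted.
```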
Authors
Sources
- Combining large language models with enterprise knowledge graphs (www.frontiersin.org)
Referenced by nodes (1)
- Large Language Models concept