claim
Conceptual hallucinations in Large Language Models (LLMs) can lead to false positives, an undesirable behavior in non-transparent models that can compromise disambiguation tasks (Peng et al., 2022).
