Relations (1)

related (score 2.00) — strongly supporting, 3 facts

Hallucinations are a central research topic in natural language processing, where they are defined as AI-generated content that is inconsistent with target data [1]. The field actively develops taxonomies to categorize these errors [2] and publishes academic research on the challenges they present [3].

Facts (3)

Sources
On Hallucinations in Artificial Intelligence–Generated Content ... (The Journal of Nuclear Medicine, jnm.snmjournals.org) — 1 fact
claim: In natural language processing, hallucinations are typically defined as artificial intelligence-generated content that is inconsistent with given targets.
A framework to assess clinical safety and hallucination rates of LLMs ... (Nature, nature.com) — 1 fact
claim: Traditional natural language processing (NLP) taxonomies categorize hallucinations into distinct types such as 'intrinsic' and 'extrinsic,' 'factuality' and 'faithfulness,' or 'factual mirage' and 'silver lining,' whereas clinical taxonomies require higher granularity to capture specific clinical error types.
A Knowledge Graph-Based Hallucination Benchmark for Evaluating ... (arXiv, arxiv.org) — 1 fact
reference: The paper 'An audit on the perspectives and challenges of hallucinations in NLP' was published in the Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing in Miami, Florida, USA, pp. 6528–6548.