Relations (1)

cross_type 1.58 — strongly supporting 2 facts

OpenAI is directly linked to the concept of hallucination through research studies identifying the phenomenon in its instruction-tuned models [1] and through measurements of GPT-3's tendency to produce factually incorrect responses [2].

Facts (2)

Sources
How Enterprise AI, powered by Knowledge Graphs, is ... (blog.metaphacts.com, metaphacts), 1 fact
Measurement: OpenAI found that the GPT-3 large language model produced hallucinations, defined as authoritative-sounding but factually incorrect or fabricated responses, approximately 15% of the time.

Survey and analysis of hallucinations in large language models (frontiersin.org, Frontiers), 1 fact
Claim: Instruction-tuned models can still hallucinate, especially on long-context, ambiguous, or factual-recall tasks, as shown by studies from OpenAI (2023a) and Bang and Madotto (2023).
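
As a minimal sketch of how the relation and its supporting facts above could be modeled, here is an illustrative Python data model. The class and field names (Relation, Fact, relation_type, score, kind) are assumptions for illustration only; the section shows rendered output, not an underlying schema.

from dataclasses import dataclass, field

# Illustrative types only; names are assumptions, not a documented schema.
@dataclass
class Fact:
    kind: str           # fact type shown above: "measurement" or "claim"
    statement: str      # the extracted sentence
    source_title: str
    source_domain: str

@dataclass
class Relation:
    subject: str                      # e.g. "OpenAI"
    obj: str                          # e.g. "hallucination"
    relation_type: str                # e.g. "cross_type"
    score: float                      # e.g. 1.58
    supporting_facts: list[Fact] = field(default_factory=list)

# The relation shown above, rebuilt from its two supporting facts.
relation = Relation(
    subject="OpenAI",
    obj="hallucination",
    relation_type="cross_type",
    score=1.58,
    supporting_facts=[
        Fact("measurement",
             "GPT-3 produced authoritative-sounding but factually incorrect "
             "responses approximately 15% of the time.",
             "How Enterprise AI, powered by Knowledge Graphs, is ...",
             "blog.metaphacts.com"),
        Fact("claim",
             "Instruction-tuned models can still hallucinate on long-context, "
             "ambiguous, or factual-recall tasks.",
             "Survey and analysis of hallucinations in large language models",
             "frontiersin.org"),
    ],
)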