Relations (1)
related (confidence 0.10) — supported by 1 fact
The concept of hallucination is directly linked to GPT-3: the model has been measured to produce such factually incorrect responses in approximately 15% of its outputs, as noted in [1].
Facts (1)
measurement: OpenAI found that the GPT-3 large language model produced hallucinations, defined as authoritative-sounding but factually incorrect or fabricated responses, approximately 15% of the time.
Sources
How Enterprise AI, powered by Knowledge Graphs, is ... (blog.metaphacts.com, 1 fact)
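The listing above follows a simple relation/fact/source schema: a relation carries a predicate, a confidence score, and a prose description, and is backed by typed facts that each point to a source. A minimal sketch of that data model in Python dataclasses, populated with the values from this section (all class and field names here are illustrative assumptions, not the tool's actual API):

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    title: str       # document title (may be truncated in the listing)
    domain: str      # e.g. "blog.metaphacts.com"
    fact_count: int  # number of facts extracted from this source

@dataclass
class Fact:
    kind: str        # fact type label, e.g. "measurement"
    statement: str   # the extracted claim, verbatim
    source: Source   # where the fact was found

@dataclass
class Relation:
    predicate: str                  # e.g. "related"
    confidence: float               # e.g. 0.10
    description: str                # prose explanation of the link
    supporting_facts: list[Fact] = field(default_factory=list)

# Example mirroring the listing above.
src = Source(
    title="How Enterprise AI, powered by Knowledge Graphs, is ...",
    domain="blog.metaphacts.com",
    fact_count=1,
)
fact = Fact(
    kind="measurement",
    statement=(
        "OpenAI found that the GPT-3 large language model produced "
        "hallucinations, defined as authoritative-sounding but factually "
        "incorrect or fabricated responses, approximately 15% of the time."
    ),
    source=src,
)
rel = Relation(
    predicate="related",
    confidence=0.10,
    description=(
        "The concept of hallucination is directly linked to GPT-3: the model "
        "has been measured to produce such responses in approximately 15% "
        "of its outputs."
    ),
    supporting_facts=[fact],
)
```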