measurement
OpenAI found that the GPT-3 large language model produced hallucinations, defined as authoritative-sounding but factually incorrect or fabricated responses, in approximately 15% of its outputs.
