Claim
Zero-shot and few-shot prompting, popularized by GPT-3 (Brown et al., 2020), supply a model with no or only a handful of task examples; both are prone to hallucination when the task is not explicitly structured.
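The zero-shot/few-shot distinction can be sketched as plain prompt construction; the function name and example strings below are illustrative assumptions, not part of any cited work:

```python
# Illustrative sketch: a zero-shot prompt contains only the task,
# while a few-shot prompt prepends worked input/output examples.
def build_prompt(task, examples=()):
    """Zero-shot when `examples` is empty; few-shot otherwise."""
    parts = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    parts.append(f"Input: {task}\nOutput:")
    return "\n\n".join(parts)

# Zero-shot: the model sees only the task description.
zero_shot = build_prompt("Translate 'bonjour' to English.")

# Few-shot: two demonstrations give the task explicit structure.
few_shot = build_prompt(
    "Translate 'bonjour' to English.",
    examples=[
        ("Translate 'gracias' to English.", "thank you"),
        ("Translate 'merci' to English.", "thank you"),
    ],
)
```

The extra demonstrations constrain the expected output format, which is one reason few-shot prompts are often less hallucination-prone than zero-shot ones on loosely specified tasks.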
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (1)
- hallucination concept