claim
The survey's authors introduce an attribution framework that links prompting and model behavior to hallucinated text, noting that a single erroneous output may stem from a combination of unclear prompting, architectural biases in the model, or limitations in the training data.
Authors
Sources
- Survey and analysis of hallucinations in large language models www.frontiersin.org via serper
Referenced by nodes (1)
- hallucination concept