claim
Model-intrinsic hallucinations arise from limitations in the training data, architectural biases, or inference-time sampling strategies, and can occur even when prompts are well constructed, as noted by Bang and Madotto (2023), OpenAI (2023a), and Chen et al. (2023).
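As an illustration of the sampling-strategy point, the following is a minimal sketch (not from the cited works; the logit values are invented toy numbers) of temperature scaling, a common inference-time sampling strategy. Raising the temperature flattens the next-token distribution, which shifts probability mass onto weakly supported tokens and can make an unsupported continuation more likely to be sampled.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to probabilities; higher temperature flattens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token logits: one well-supported token, two weakly supported ones.
logits = [5.0, 1.0, 0.5]

low_t = softmax_with_temperature(logits, 0.5)   # sharp: mass concentrates on token 0
high_t = softmax_with_temperature(logits, 2.0)  # flat: weak tokens gain probability

print(low_t)
print(high_t)
```

At the higher temperature, the weakly supported tokens receive a noticeably larger share of the probability mass, so sampling is more likely to select them regardless of how well organized the prompt was.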
