claim
Instruction-tuned models can still hallucinate, particularly on long-context, ambiguous, or factual-recall tasks, as shown in studies by OpenAI (2023a) and Bang and Madotto (2023).
