claim
Instruction-following datasets used for supervised finetuning often have thin coverage of rare query types, meaning models receive little practice on the specific queries where they are most likely to hallucinate.
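A minimal sketch of how thin coverage can be surfaced: bucket an instruction-tuning dataset by query type and flag types with few examples. The dataset, query-type labels, and `min_count` threshold here are hypothetical, purely for illustration.

```python
from collections import Counter

def coverage_report(examples, min_count=3):
    """Count examples per query type and flag types whose
    coverage falls below min_count (thin coverage)."""
    counts = Counter(qtype for _, qtype in examples)
    thin = {qtype: n for qtype, n in counts.items() if n < min_count}
    return counts, thin

# Hypothetical toy dataset of (query, query_type) pairs.
dataset = [
    ("Summarize this article", "summarization"),
    ("Summarize the meeting notes", "summarization"),
    ("Translate to French", "translation"),
    ("Translate to German", "translation"),
    ("Translate to Latin", "translation"),
    ("Who won the 1954 Tour de France?", "rare_fact_lookup"),
]

counts, thin = coverage_report(dataset)
# rare_fact_lookup has a single example: the model gets almost no
# supervised practice on that query type.
```

Query types under the `min_count` threshold are exactly the regions where the model has had little practice and, per the claim above, is most prone to hallucinate.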
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com)
Referenced by nodes (2)
- Large Language Models concept
- supervised fine-tuning concept