claim
Instruction tuning can teach large language models to express uncertainty with phrases like "I'm not certain," but the behavior is learned as a surface pattern rather than as a calibrated epistemic state.
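One way to make the claim concrete: if hedging phrases reflected a calibrated epistemic state, the empirical accuracy of answers accompanied by each phrase would match the confidence the phrase implies. The sketch below illustrates that check with entirely hypothetical data; the phrase-to-confidence mapping and the observations are assumptions for illustration, not measurements from any study.

```python
# Sketch: checking whether verbal uncertainty phrases are calibrated.
# All values are hypothetical illustrations, not real model outputs.

# Confidence each hedging phrase nominally implies (assumed mapping).
phrase_confidence = {
    "I'm certain": 0.95,
    "I'm fairly sure": 0.75,
    "I'm not certain": 0.50,
}

# Hypothetical (phrase used, answer was correct) observations.
observations = [
    ("I'm certain", True), ("I'm certain", True), ("I'm certain", False),
    ("I'm not certain", True), ("I'm not certain", True),
    ("I'm not certain", True), ("I'm not certain", False),
]

def calibration_gaps(obs, conf_map):
    """Per-phrase gap between empirical accuracy and implied confidence.

    A model that uses hedges as a surface pattern will show large gaps;
    a calibrated model's gaps would be near zero.
    """
    gaps = {}
    for phrase, conf in conf_map.items():
        outcomes = [correct for p, correct in obs if p == phrase]
        if outcomes:
            accuracy = sum(outcomes) / len(outcomes)
            gaps[phrase] = accuracy - conf
    return gaps

print(calibration_gaps(observations, phrase_confidence))
```

In this toy data, "I'm not certain" accompanies correct answers 75% of the time, a large positive gap over its implied 50% confidence, which is exactly the surface-pattern mismatch the claim describes.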
Authors
Sources
- Hallucination Causes: Why Language Models Fabricate Facts (mbrenndoerfer.com, via Serper)
Referenced by nodes (2)
- Large Language Models concept
- instruction tuning concept