claim
Efforts to mitigate hallucinations at the model level include supervised fine-tuning (SFT), reinforcement learning from human feedback (RLHF), grounded pretraining, and, at decoding time, contrastive decoding.
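
As an illustration of the decoding-side technique named in the claim, below is a minimal sketch of a single contrastive-decoding step in the expert/amateur formulation of Li et al. (2023): tokens are scored by the gap between expert and amateur log-probabilities, restricted to a plausibility set. The function name, the `alpha` parameter default, and the numpy implementation are illustrative assumptions, not taken from the source.

```python
import numpy as np

def contrastive_decoding_step(expert_logits, amateur_logits, alpha=0.1):
    """One step of contrastive decoding (illustrative sketch).

    Tokens are scored by expert log-prob minus amateur log-prob,
    restricted to tokens whose expert probability is at least
    alpha times the expert's maximum token probability.
    """
    # Convert logits to log-probabilities (log-softmax).
    expert_logprobs = expert_logits - np.logaddexp.reduce(expert_logits)
    amateur_logprobs = amateur_logits - np.logaddexp.reduce(amateur_logits)

    # Plausibility constraint: keep only tokens the expert itself finds likely.
    cutoff = np.log(alpha) + expert_logprobs.max()
    plausible = expert_logprobs >= cutoff

    # Contrastive score: prefer tokens where the expert outperforms the amateur.
    scores = np.where(plausible, expert_logprobs - amateur_logprobs, -np.inf)
    return int(np.argmax(scores))

# Hypothetical usage with random logits over a toy vocabulary.
rng = np.random.default_rng(0)
next_token = contrastive_decoding_step(rng.normal(size=50), rng.normal(size=50))
```

The plausibility cutoff is what keeps the score difference from promoting tokens that the expert itself considers implausible, which is the failure mode a naive log-ratio would have.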
