claim
Kalavasis et al. (2025) proved that access to negative examples allows Large Language Models to achieve consistent generation with breadth, offering a theoretical mitigation strategy for hallucinations.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (1)
- Large Language Models concept