claim
Kalavasis et al. (2025) proved that access to negative examples enables consistent generation with breadth, suggesting a strategy for mitigating hallucinations in Large Language Models.
