Claim
Research on Large Language Model evaluation has bifurcated into two main areas: a critical re-examination of the validity of traditional, static benchmarks, and a rigorous investigation into the reliability and biases of the emerging 'LLM-as-a-Judge' paradigm.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models arxiv.org via serper
Referenced by nodes (1)
- LLM-as-a-judge concept