Claim
The Evaluation Stage of Large Language Models faces a significant open challenge: moving beyond empirical evaluation via benchmarks toward formal guarantees of model behavior, such as proving that a model will not hallucinate or leak sensitive information under specified conditions.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (2)
- hallucination concept
- benchmarks concept