procedure
LLM-as-a-judge (also called self-evaluation when the model judges its own output) is an approach in which a large language model is prompted to assess the correctness of, or its confidence in, a generated response, typically by assigning a score on a Likert scale.
