claim
In February, Giskard announced Phare (Potential Harm Assessment & Risk Evaluation), a benchmark designed to evaluate the safety and security of leading large language models across four domains: hallucination, bias and fairness, harmfulness, and vulnerability to intentional abuse.
Authors
Sources
- Phare LLM Benchmark: an analysis of hallucination in ... www.giskard.ai
Referenced by nodes (1)
- Large Language Models concept