reference
CyberSecEval is a set of cybersecurity safety benchmarks included in Meta's open-source safety tools. It helps developers understand and quantify the risk that large language models will suggest insecure code or be misused to generate malicious content or assist cyberattacks.
Authors
Sources
- How Open-Source AI Drives Responsible Innovation - The Atlantic (www.theatlantic.com)
Referenced by nodes (3)
- Large Language Models concept
- Meta entity
- cyber attacks concept