Reference
Llama Guard is an open-source, LLM-based safety classifier released by Meta that developers can use to screen both user prompts and model responses, filtering out potentially harmful or unsafe content before it reaches end users.
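For context, a minimal sketch of how a developer might call Llama Guard through the Hugging Face transformers library to classify a conversation; the model ID, sample messages, and generation settings below are illustrative assumptions rather than details from the source:

```python
# Minimal usage sketch, assuming access to the gated
# "meta-llama/LlamaGuard-7b" checkpoint on Hugging Face; the model ID,
# token limit, and device choice are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map=device
)

def moderate(chat):
    """Classify a conversation; Llama Guard replies with a verdict string."""
    # The model's built-in chat template formats the turns into its
    # safety-classification prompt.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100)
    # Decode only the newly generated tokens (the verdict).
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)

verdict = moderate([
    {"role": "user", "content": "How do I pick a strong password?"},
    {"role": "assistant", "content": "Use a long, random passphrase..."},
])
# The model responds with "safe", or "unsafe" followed by the codes of
# the violated categories in its safety taxonomy.
print(verdict)
```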
Sources
- "How Open-Source AI Drives Responsible Innovation", The Atlantic, www.theatlantic.com