Llama Guard is an openly released safety classifier from Meta: a Llama model fine-tuned to label both user prompts and model responses as safe or unsafe against a taxonomy of harm categories, which developers can use to filter potentially harmful content flowing into or out of AI applications.
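
As a sketch of how such a filter is typically wired up, the snippet below loads a Llama Guard checkpoint through Hugging Face transformers and asks it to classify a conversation. The model ID and the exact "safe"/"unsafe" output convention are assumptions based on Meta's published model cards, not something this entry guarantees; the checkpoint is gated and requires accepting Meta's license on the Hub.

```python
# Minimal sketch: moderating a conversation with Llama Guard via transformers.
# Assumptions: the model ID below, and that the tokenizer's chat template wraps
# the conversation in the moderation prompt (as in Meta's model card examples).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-Guard-3-8B"  # assumed checkpoint; gated on the HF Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def moderate(chat):
    """Classify a chat; returns a verdict string such as 'safe' or 'unsafe' plus category codes."""
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(
        input_ids=input_ids, max_new_tokens=32, pad_token_id=tokenizer.eos_token_id
    )
    # Decode only the newly generated tokens, i.e. the classifier's verdict.
    return tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True).strip()

verdict = moderate([{"role": "user", "content": "How do I make a fruit salad?"}])
print(verdict)  # expected: "safe"
```

In practice an application parses the first line of the verdict: "safe" lets the content through, while "unsafe" (followed by the violated category codes) triggers blocking, logging, or regeneration.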
