Reference
In 'Realistic evaluation of toxicity in large language models' (Findings of the Association for Computational Linguistics: ACL 2024), Luong et al. (2024) presented a method for realistically evaluating toxicity in large language models.
Authors
Sources
- A Survey of Incorporating Psychological Theories in LLMs (arXiv)
Referenced by nodes (3)
- Large Language Models concept
- Association for Computational Linguistics entity
- toxicity concept