claim
Prompting Large Language Models to adopt specific social identities can both reduce bias, as demonstrated by Dong et al. (2024a), and elicit human-like ingroup favoritism, as demonstrated by Hu et al. (2025a).
Sources
- A Survey of Incorporating Psychological Theories in LLMs - arXiv (arxiv.org)
Referenced by nodes (2)
- Large Language Models concept
- bias concept