claim
Malicious actors can leverage social influence to undermine trust in digital spaces; inoculation theory offers a way to proactively guard Large Language Models against such manipulative strategies, as noted by Zeng et al. (2024a), Liu et al. (2025), and Ai et al. (2024b).
Authors
Sources
- A Survey of Incorporating Psychological Theories in LLMs (arXiv)
Referenced by nodes (3)
- Large Language Models concept
- trust concept
- social influence theory concept