Reference
The research paper "What features in prompts jailbreak LLMs? Investigating the mechanisms behind attacks" examines which prompt features enable jailbreaks and the mechanisms by which adversarial prompts bypass the safeguards of large language models.
Authors
Sources
- A Survey on the Theory and Mechanism of Large Language Models (arxiv.org)
Referenced by nodes (1)
- adversarial attack concept