claim
Prompt injection or adversarial prompting can override the attention of Large Language Models to previous instructions and force them to act on the current prompt, an issue that has affected GPT-3 (Branch et al. 2022).

Authors

Sources

Referenced by nodes (2)