claim
Prompt injection or adversarial prompting can override the attention of Large Language Models to previous instructions and force them to act on the current prompt, an issue that has affected GPT-3 (Branch et al. 2022).
Authors
Sources
- Building Trustworthy NeuroSymbolic AI Systems - arXiv arxiv.org via serper
Referenced by nodes (2)
- Large Language Models concept
- GPT-3 concept