claim
System prompts in LLMs act as behavior guides and repositories for sensitive information, and their leakage can expose underlying system weaknesses and improper security architectures.

Authors

Sources

Referenced by nodes (1)