Learning to Inject: Automated Prompt Injection via Reinforcement Learning

Xin Chen, Jie Zhang and Florian Tramèr



Abstract

Prompt injection is one of the most critical vulnerabilities in LLM agents, yet effective automated attacks remain largely unexplored from an optimization perspective. Existing methods depend heavily on human red-teamers and hand-crafted prompts, limiting their scalability and adaptability. We propose AutoInject, a reinforcement learning framework that generates universal, transferable adversarial suffixes while jointly optimizing for attack success and utility preservation on benign tasks. Our black-box method supports both query-based optimization and transfer attacks to unseen models and tasks. Using only a 1.5B-parameter adversarial suffix generator, we successfully compromise frontier systems including GPT-5 Nano, Claude 3.5 Sonnet, and Gemini 2.5 Flash on the AgentDojo benchmark, establishing a stronger baseline for automated prompt injection research.
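The joint objective described above, balancing attack success against utility preservation, can be illustrated with a minimal sketch. This is an assumption about the general shape of such a reward, not the paper's actual implementation; the names `joint_reward`, `LAMBDA`, and the linear penalty form are all hypothetical.

```python
# Hypothetical sketch of a joint reward for an adversarial suffix generator:
# reward attack success, penalize any drop in the agent's benign-task utility.
# LAMBDA trades off the two terms; its value here is purely illustrative.
LAMBDA = 0.5

def joint_reward(attack_success_rate: float,
                 benign_utility: float,
                 baseline_utility: float) -> float:
    """Attack success minus a penalty for utility lost on benign tasks."""
    utility_drop = max(0.0, baseline_utility - benign_utility)
    return attack_success_rate - LAMBDA * utility_drop

# A suffix that attacks well without hurting benign performance scores highest.
print(joint_reward(0.8, 0.9, 0.9))  # no utility drop: reward equals success rate
print(joint_reward(0.8, 0.7, 0.9))  # utility dropped 0.2: reward is penalized
```

A reinforcement learning loop would then update the suffix generator's policy to maximize this reward over sampled suffixes.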


BibTeX
@misc{CZT26,
  author       = {Chen, Xin and Zhang, Jie and Tram{\`e}r, Florian},
  title        = {Learning to Inject: Automated Prompt Injection via Reinforcement Learning},
  year         = {2026},
  howpublished = {arXiv preprint arXiv:2602.05746},
  url          = {https://arxiv.org/abs/2602.05746}
}