Gradient-based Jailbreak Images for Multimodal Fusion Models

Javier Rando, Hannah Korevaar, Erik Brinkman, Ivan Evtimov, and Florian Tramèr

Abstract

Augmenting language models with image inputs may enable more effective jailbreak attacks through continuous optimization, unlike text inputs that require discrete optimization. However, new multimodal fusion models tokenize all input modalities using non-differentiable functions, which hinders straightforward attacks. In this work, we introduce the notion of a tokenizer shortcut that approximates tokenization with a continuous function and thereby enables continuous optimization. We use tokenizer shortcuts to create the first end-to-end gradient-based image attacks against multimodal fusion models. We evaluate our attacks on Chameleon models and obtain jailbreak images that elicit harmful information for 72.5% of prompts. Jailbreak images outperform text jailbreaks optimized with the same objective and require a 3x lower compute budget while optimizing 50x more input tokens. Finally, we find that representation engineering defenses such as Circuit Breakers, even when trained only on text attacks, transfer effectively to adversarial image inputs.
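
The shortcut idea is easiest to see on a VQ-style image tokenizer, which normally snaps each patch embedding to its nearest codebook entry via a non-differentiable argmin. Below is a minimal PyTorch sketch, not the paper's exact construction, of one way such a shortcut could look: the hard nearest-neighbor assignment is replaced by a softmax over negative codebook distances, so gradients can flow from the model's loss back toward the image pixels. The names (soft_tokenize, codebook, tau) are illustrative, not taken from the paper.

import torch
import torch.nn.functional as F

def soft_tokenize(z, codebook, tau=1.0):
    # z:        (num_patches, d) continuous patch embeddings from the image encoder
    # codebook: (vocab_size, d) embeddings of the discrete image tokens
    # Hard VQ tokenization would take an argmin over distances; here we take a
    # softmax over negative distances and return a convex combination of
    # codebook vectors, which is continuous in z.
    dists = torch.cdist(z, codebook)           # (num_patches, vocab_size)
    weights = F.softmax(-dists / tau, dim=-1)  # soft one-hot assignments
    return weights @ codebook                  # differentiable stand-in for the tokens

# Example: gradients reach the patch embeddings (and, in a full attack,
# the pixels that produced them).
z = torch.randn(256, 64, requires_grad=True)   # 256 patches, 64-dim embeddings
codebook = torch.randn(8192, 64)               # 8192-entry codebook
soft_tokens = soft_tokenize(z, codebook, tau=0.5)
soft_tokens.sum().backward()                   # z.grad is now populated

As tau shrinks, the soft assignment approaches the hard nearest-neighbor tokenization, so the relaxation can be made arbitrarily close to the discrete pipeline it approximates.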

BibTeX
@misc{RKBE+24,
  author       = {Rando, Javier and Korevaar, Hannah and Brinkman, Erik and Evtimov, Ivan and Tram{\`e}r, Florian},
  title        = {Gradient-based Jailbreak Images for Multimodal Fusion Models},
  year         = {2024},
  howpublished = {arXiv preprint arXiv:2410.03489},
  url          = {https://arxiv.org/abs/2410.03489}
}