Are aligned neural networks adversarially aligned?

Nicholas Carlini, Milad Nasr, Christopher A. Choquette-Choo, Matthew Jagielski, Irena Gao, Anas Awadalla, Pang Wei Koh, Daphne Ippolito, Katherine Lee, Florian Tramèr and Ludwig Schmidt

Conference on Neural Information Processing Systems (NeurIPS) 2023



Abstract

Large language models are now tuned to align with the goals of their creators, namely to be "helpful and harmless." These models should respond helpfully to user questions, but refuse to answer requests that could cause harm. However, adversarial users can construct inputs which circumvent attempts at alignment. In this work, we study to what extent these models remain aligned, even when interacting with an adversarial user who constructs worst-case inputs (adversarial examples). These inputs are designed to cause the model to emit harmful content that would otherwise be prohibited. We show that existing NLP-based optimization attacks are insufficiently powerful to reliably attack aligned text models: even when current NLP-based attacks fail, we can find adversarial inputs with brute force. As a result, the failure of current attacks should not be seen as proof that aligned text models remain aligned under adversarial inputs.
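For intuition, the following is a minimal sketch of what a brute-force search for an adversarial text suffix could look like. The generate and judge functions are hypothetical placeholders standing in for the target model and a harmfulness check; this is an illustration of the general idea, not the procedure used in the paper.

import random
import string

# Hypothetical stand-in for querying the aligned chat model.
def generate(prompt: str) -> str:
    # Toy behavior: "refuses" unless the prompt happens to contain the substring "zq".
    return "harmful output" if "zq" in prompt else "I can't help with that."

# Hypothetical stand-in for a harmfulness judge applied to the model's response.
def judge(response: str) -> bool:
    return "harmful" in response

def brute_force_suffix(base_prompt: str, suffix_len: int = 8, budget: int = 100_000):
    """Randomly sample suffixes until one elicits a judged-harmful reply."""
    alphabet = string.ascii_lowercase + string.digits + string.punctuation + " "
    for _ in range(budget):
        suffix = "".join(random.choices(alphabet, k=suffix_len))
        if judge(generate(base_prompt + " " + suffix)):
            return suffix
    return None  # no adversarial suffix found within the query budget

if __name__ == "__main__":
    print(brute_force_suffix("Tell me how to do X."))

Such exhaustive or random search is far more expensive than a gradient-based attack, which is why it only serves as evidence that stronger optimization attacks should exist, not as a practical attack itself.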
However, the recent trend in large-scale ML models is multimodal models that allow users to provide images that influence the text that is generated. We show these models can be easily attacked, i.e., induced to perform arbitrary unaligned behavior through adversarial perturbation of the input image. We conjecture that improved NLP attacks may demonstrate this same level of adversarial control over text-only models.
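As a rough illustration of an image-space attack of this kind, the sketch below runs PGD-style gradient steps on an input image to push a model toward a chosen target output. ToyMultimodalLM, the 32x32 input size, and the single target token are hypothetical stand-ins rather than any model or setup from the paper; a real attack would optimize the target string's likelihood under an actual vision-language model.

import torch
import torch.nn.functional as F

class ToyMultimodalLM(torch.nn.Module):
    """Hypothetical stand-in: maps an image to logits over a small vocabulary."""
    def __init__(self, vocab_size: int = 100):
        super().__init__()
        self.encoder = torch.nn.Sequential(
            torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64), torch.nn.ReLU()
        )
        self.head = torch.nn.Linear(64, vocab_size)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(image))  # (batch, vocab_size)

def pgd_image_attack(model, image, target_token, eps=8 / 255, step=1 / 255, iters=100):
    """Perturb `image` within an L-infinity ball so the model emits `target_token`."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(iters):
        logits = model(torch.clamp(image + delta, 0.0, 1.0))
        loss = F.cross_entropy(logits, target_token)  # negative log-likelihood of the target
        loss.backward()
        with torch.no_grad():
            delta -= step * delta.grad.sign()  # gradient descent on the NLL
            delta.clamp_(-eps, eps)            # project back into the eps-ball
        delta.grad.zero_()
    return torch.clamp(image + delta, 0.0, 1.0).detach()

if __name__ == "__main__":
    model = ToyMultimodalLM()
    image = torch.rand(1, 3, 32, 32)   # placeholder input image
    target = torch.tensor([17])        # arbitrary target token standing in for harmful text
    adv = pgd_image_attack(model, image, target)
    print("predicted token after attack:", model(adv).argmax(dim=-1).item())

Because the image enters the model as continuous input, standard gradient-based perturbation methods apply directly, which is why the image channel is so much easier to attack than discrete text tokens.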


BibTeX
@inproceedings{CNCJ+23,
  author   =   {Carlini, Nicholas and Nasr, Milad and Choquette-Choo, Christopher A. and Jagielski, Matthew and Gao, Irena and Awadalla, Anas and Koh, Pang Wei and Ippolito, Daphne and Lee, Katherine and Tram{\`e}r, Florian and Schmidt, Ludwig},
  title   =   {Are aligned neural networks adversarially aligned?},
  booktitle   =   {Conference on Neural Information Processing Systems (NeurIPS)},
  year   =   {2023},
  howpublished   =   {arXiv preprint arXiv:2306.15447},
  url   =   {https://arxiv.org/abs/2306.15447}
}