Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems

Chawin Sitawarin, Florian Tramèr and Nicholas Carlini

International Conference on Machine Learning (ICML) 2023



Abstract

Decision-based adversarial attacks construct inputs that fool a machine learning model into making targeted mispredictions, using only hard-label queries. For the most part, these attacks have been applied directly to isolated neural network models. However, in practice, machine learning models are just one component of a much larger system. We find that by adding a single preprocessor in front of a classifier, state-of-the-art query-based attacks become as much as seven times less effective at attacking a prediction pipeline than at attacking the machine learning model alone. Most preprocessors introduce some notion of invariance to the input space, and attacks that are unaware of this invariance inevitably waste a large number of queries to re-discover or overcome it. We therefore develop techniques to first reverse-engineer the preprocessor and then use this extracted information to attack the end-to-end system. Our extraction method requires only a few hundred queries to learn the preprocessors used by most publicly available model pipelines, and our preprocessor-aware attacks recover the same efficacy as attacking the model alone.
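
To make the setting concrete, the following is a minimal, self-contained sketch (in Python, not the authors' code) of a hard-label prediction pipeline with a hidden preprocessor, here assumed to be a nearest-neighbor resize. The names resize_nearest, toy_classifier, and HardLabelPipeline are hypothetical stand-ins. The sketch illustrates the invariance mentioned above: any perturbation the preprocessor discards cannot change the pipeline's answer, so a preprocessor-unaware attack wastes queries exploring such directions.

import numpy as np


def resize_nearest(x: np.ndarray, size: int) -> np.ndarray:
    """Nearest-neighbor resize of a square image (H, W, C) to (size, size, C)."""
    h, w = x.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return x[rows][:, cols]


def toy_classifier(x: np.ndarray) -> int:
    """Stand-in for the model: hard label from a fixed random linear head."""
    rng = np.random.default_rng(0)
    weights = rng.normal(size=x.size)
    return int(weights @ x.ravel() > 0)


class HardLabelPipeline:
    """Preprocessor followed by a classifier; callers only ever see the label."""

    def __init__(self, size: int = 224):
        self.size = size

    def query(self, x: np.ndarray) -> int:
        return toy_classifier(resize_nearest(x, self.size))


pipeline = HardLabelPipeline(size=224)
x = np.random.default_rng(1).uniform(size=(512, 512, 3)).astype(np.float32)

# Pixel (1, 1) is never sampled by the 512 -> 224 nearest-neighbor map, so a
# perturbation there is invisible to the pipeline: the hard label cannot change.
x_pert = x.copy()
x_pert[1, 1, 0] += 100.0
assert pipeline.query(x_pert) == pipeline.query(x)

An attacker who has first recovered the preprocessor can instead craft perturbations directly in the 224x224 space the model actually sees, which is how the preprocessor-aware attacks described in the abstract regain the query efficiency of attacking the model alone.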


BibTeX
@inproceedings{STC23,
  author    = {Sitawarin, Chawin and Tram{\`e}r, Florian and Carlini, Nicholas},
  title     = {Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2023},
  note      = {arXiv preprint arXiv:2210.03297},
  url       = {https://arxiv.org/abs/2210.03297}
}