Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh and Patrick McDaniel
Adversarial examples are maliciously perturbed inputs designed to mislead machine learning (ML) models at test time. They often transfer: the same adversarial example fools more than one model.
In this work, we propose novel methods for estimating the previously unknown dimensionality of the space of adversarial inputs. We find that adversarial examples span a contiguous subspace of large (~25) dimensionality. Adversarial subspaces with higher dimensionality are more likely to intersect. We find that for two different models, a significant fraction of their subspaces is shared, thus enabling transferability.
In the first quantitative analysis of the similarity of different models’ decision boundaries, we show that these boundaries are actually close in arbitrary directions, whether adversarial or benign. We conclude by formally studying the limits of transferability. We derive (1) sufficient conditions on the data distribution that imply transferability for simple model classes and (2) examples of scenarios in which transfer does not occur. These findings indicate that it may be possible to design defenses against transfer-based attacks, even for models that are vulnerable to direct attacks.
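The dimensionality-estimation idea can be illustrated with a small sketch. The snippet below is a minimal numpy illustration, not the authors' code: given the loss gradient g at an input, it constructs k orthonormal perturbation directions that all keep the same positive alignment with g, which is the kind of gradient-aligned construction one can use to probe how many independent adversarial directions exist. The function name and the QR-based construction are illustrative assumptions.

import numpy as np

def gradient_aligned_directions(g, k, seed=0):
    """Return a d x k matrix whose columns r_1..r_k are orthonormal and all
    satisfy <r_i, g> = ||g|| / sqrt(k). Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    d = g.shape[0]
    g_hat = g / np.linalg.norm(g)

    # Orthonormal basis {z_1,...,z_k} of a k-dim subspace with z_1 = g_hat:
    # complete g_hat with random vectors and orthonormalize via QR.
    Z, _ = np.linalg.qr(np.column_stack([g_hat, rng.standard_normal((d, k - 1))]))
    Z[:, 0] = g_hat  # undo any sign flip from QR; columns stay orthonormal

    # k x k orthogonal matrix H whose first column is (1,...,1)/sqrt(k).
    ones = np.ones(k) / np.sqrt(k)
    H, _ = np.linalg.qr(np.column_stack([ones, np.eye(k)[:, 1:]]))
    H *= np.sign(H[0, 0])  # make H[:, 0] equal +ones rather than -ones

    # r_i = sum_j H[i, j] z_j, so <r_i, g> = H[i, 0] * ||g|| = ||g|| / sqrt(k),
    # and the r_i are pairwise orthogonal because H is orthogonal.
    return Z @ H.T

# Quick check of the claimed properties for a 784-dimensional gradient.
g = np.random.default_rng(1).standard_normal(784)
R = gradient_aligned_directions(g, k=25)
assert np.allclose(R.T @ R, np.eye(25))                       # orthonormal directions
assert np.allclose(R.T @ g, np.linalg.norm(g) / np.sqrt(25))  # equal alignment with g

In practice, one would scale such directions to a fixed perturbation budget and count how many of them (and their combinations) still cause misclassification; the largest such set gives an estimate of the adversarial subspace's dimensionality.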
@misc{TPGB+17,
  author       = {Tram{\`e}r, Florian and Papernot, Nicolas and Goodfellow, Ian and Boneh, Dan and McDaniel, Patrick},
  title        = {The Space of Transferable Adversarial Examples},
  year         = {2017},
  howpublished = {arXiv preprint arXiv:1704.03453},
  url          = {https://arxiv.org/abs/1704.03453}
}