The Privacy Onion Effect: Memorization is Relative

Nicholas Carlini, Matthew Jagielski, Nicolas Papernot, Andreas Terzis, Florian Tramèr and Chiyuan Zhang (alphabetical author ordering)

Conference on Neural Information Processing Systems (NeurIPS) 2022



Abstract

Machine learning models trained on private datasets have been shown to leak their private data. While recent work has found that the average data point is rarely leaked, the outlier samples are frequently subject to memorization and, consequently, privacy leakage. We demonstrate and analyse an Onion Effect of memorization: removing the "layer" of outlier points that are most vulnerable to a privacy attack exposes a new layer of previously-safe points to the same attack. We perform several experiments to study this effect, and understand why it occurs. The existence of this effect has various consequences. For example, it suggests that proposals to defend against memorization without training with rigorous privacy guarantees are unlikely to be effective. Further, it suggests that privacy-enhancing technologies such as machine unlearning could actually harm the privacy of other users.
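
To make the experiment concrete, below is a minimal, hypothetical sketch of the "peel the onion" procedure the abstract describes: score each training point's vulnerability with a crude loss-based membership-inference proxy, remove the most vulnerable layer, retrain, and re-score the remaining points. The synthetic dataset, logistic-regression model, and loss-threshold score here are illustrative assumptions only; the paper's actual experiments use image classifiers on standard benchmarks with a far stronger membership-inference attack.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in data: the first half is the training set ("members"),
# the second half is held out ("non-members").
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
members, nonmembers = np.arange(0, 1000), np.arange(1000, 2000)

def per_example_loss(model, X, y):
    """Cross-entropy loss of each example under the model."""
    probs = model.predict_proba(X)
    return -np.log(probs[np.arange(len(y)), y] + 1e-12)

def vulnerability(train_idx):
    """Train on train_idx and return a crude per-member vulnerability score:
    how much lower a member's loss is than the typical held-out loss.
    Higher score = easier to infer that the point was a training member."""
    model = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    member_loss = per_example_loss(model, X[train_idx], y[train_idx])
    holdout_loss = per_example_loss(model, X[nonmembers], y[nonmembers])
    return np.median(holdout_loss) - member_loss

# Layer 1: attack the full training set and flag the most vulnerable 10%.
scores_full = vulnerability(members)
layer1 = members[np.argsort(scores_full)[-100:]]
remaining = np.setdiff1d(members, layer1)   # the "previously safe" points

# Peel the layer off, retrain on the remainder, and attack again.
scores_peeled = vulnerability(remaining)

# The Onion Effect predicts that some previously-safe points become markedly
# more vulnerable once the outlier layer above them is removed.
before = scores_full[np.isin(members, remaining)]
print("mean vulnerability of remaining points before peeling:", before.mean())
print("mean vulnerability of remaining points after peeling: ", scores_peeled.mean())
print("points whose vulnerability increased after peeling:   ",
      int(np.sum(scores_peeled > before)))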


BibTeX
@inproceedings{CJPT+22,
  author    = {Carlini, Nicholas and Jagielski, Matthew and Papernot, Nicolas and Terzis, Andreas and Tram{\`e}r, Florian and Zhang, Chiyuan},
  title     = {The Privacy Onion Effect: Memorization is Relative},
  booktitle = {Conference on Neural Information Processing Systems (NeurIPS)},
  year      = {2022},
  note      = {arXiv preprint arXiv:2206.10469},
  url       = {https://arxiv.org/abs/2206.10469}
}