Students Parrot Their Teachers: Membership Inference on Model Distillation

Matthew Jagielski, Milad Nasr, Christopher Choquette-Choo, Katherine Lee, Nicholas Carlini and Florian Tramèr

Conference on Neural Information Processing Systems (NeurIPS) 2023 (Oral Presentation)



Abstract

Model distillation is frequently proposed as a technique to reduce the privacy leakage of machine learning models. These empirical privacy defenses rely on the intuition that distilled “student” models protect the privacy of training data, as they only interact with this data indirectly through a “teacher” model. In this work, we design membership inference attacks to systematically study the privacy provided by knowledge distillation to both the teacher and student training sets. Our new attacks show that distillation alone provides only limited privacy across a number of domains. We explain the success of our attacks on distillation by showing that membership inference attacks on a private dataset can succeed even if the target model is *never* queried on any actual training points, but only on inputs whose predictions are highly influenced by the training data. Finally, we show that our attacks are strongest when the student and teacher sets are similar, or when the attacker can poison the teacher set.
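
The threat model can be illustrated with a minimal sketch: a teacher is trained on a private dataset, a student is distilled from the teacher on a disjoint dataset, and the attacker then queries *only the student* to infer membership in the teacher's training set. The sketch below is a hypothetical, simplified illustration using scikit-learn, synthetic data, hard-label distillation, and a basic confidence-threshold attack; it is not the paper's actual attack, which is considerably stronger.

```python
# Hypothetical sketch of membership inference against a distilled student.
# Assumptions (not from the paper): synthetic data, hard-label distillation,
# and a simple confidence-threshold attack scored by AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Disjoint splits: private teacher set, public student set, and non-members
# used to evaluate the attack.
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           random_state=0)
X_teacher, X_rest, y_teacher, y_rest = train_test_split(
    X, y, train_size=1000, random_state=0)
X_student, X_nonmember, y_student, y_nonmember = train_test_split(
    X_rest, y_rest, train_size=1000, random_state=0)

# Teacher is trained directly on its private data.
teacher = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
teacher.fit(X_teacher, y_teacher)

# Student never sees the teacher set: it is trained on the student set,
# labeled by the teacher (hard-label distillation for simplicity).
student = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
student.fit(X_student, teacher.predict(X_student))

def confidence_scores(model, X_query, y_query):
    """Attack score: the model's confidence in each point's true label."""
    probs = model.predict_proba(X_query)
    return probs[np.arange(len(y_query)), y_query]

# The attacker queries only the student, on teacher-set members vs. non-members.
member_scores = confidence_scores(student, X_teacher, y_teacher)
nonmember_scores = confidence_scores(student, X_nonmember, y_nonmember)

scores = np.concatenate([member_scores, nonmember_scores])
labels = np.concatenate([np.ones_like(member_scores), np.zeros_like(nonmember_scores)])
print("Membership inference AUC (student queries only):",
      roc_auc_score(labels, scores))
```

An AUC noticeably above 0.5 in this kind of setup reflects the paper's point that membership signal about the teacher's data can survive distillation and be extracted without ever querying the teacher.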


BibTeX
@inproceedings{JNCL+23,
  author   =   {Jagielski, Matthew and Nasr, Milad and Choquette-Choo, Christopher and Lee, Katherine and Carlini, Nicholas and Tram{\`e}r, Florian},
  title   =   {Students Parrot Their Teachers: Membership Inference on Model Distillation},
  booktitle   =   {Conference on Neural Information Processing Systems (NeurIPS)},
  year   =   {2023},
  note   =   {arXiv preprint arXiv:2303.03446},
  url   =   {https://arxiv.org/abs/2303.03446}
}