Measuring Forgetting of Memorized Training Examples

Matthew Jagielski, Om Thakkar, Florian Tramèr, Daphne Ippolito, Katherine Lee, Nicholas Carlini, Eric Wallace, Shuang Song, Abhradeep Thakurta, Nicolas Papernot and Chiyuan Zhang

International Conference on Learning Representations (ICLR) 2023



Abstract

Machine learning models exhibit two seemingly contradictory phenomena: training data memorization and various forms of forgetting. In memorization, models overfit specific training examples and become susceptible to privacy attacks. In forgetting, examples which appeared early in training are forgotten by the end. In this work, we connect these phenomena. We propose a technique to measure to what extent models “forget” the specifics of training examples, becoming less susceptible to privacy attacks on examples they have not seen recently. We show that, while non-convexity can prevent forgetting from happening in the worst-case, standard image and speech models empirically do forget examples over time. We identify nondeterminism as a potential explanation, showing that deterministically trained models do not forget. Our results suggest that examples seen early when training with extremely large datasets – for instance those examples used to pre-train a model – may observe privacy benefits at the expense of examples seen later.
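The sketch below illustrates the general idea of the measurement (it is not the authors' code or experimental setup): inject a batch of "canary" examples into training once, continue training on fresh data only, and track a simple loss-threshold membership-inference signal on the canaries at later checkpoints. If the model forgets, the attack's advantage should decay toward chance. The synthetic data, the small network, and names like make_data and attack_auc are illustrative assumptions; the paper's actual attacks, models, and datasets differ.

import torch
import torch.nn as nn

torch.manual_seed(0)

d = 20
true_w = torch.randn(d)

def make_data(n):
    # Synthetic linearly separable 2-class data standing in for a real training set.
    x = torch.randn(n, d)
    y = (x @ true_w > 0).long()
    return x, y

train_x, train_y = make_data(2000)
canary_x, canary_y = make_data(100)    # seen only in the first epoch
heldout_x, heldout_y = make_data(100)  # never trained on; baseline for the attack

model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss(reduction="none")

def attack_auc():
    # Loss-threshold membership inference: fraction of (canary, held-out) pairs
    # where the canary has lower loss. 0.5 is chance; higher means more memorization.
    with torch.no_grad():
        lc = loss_fn(model(canary_x), canary_y)
        lh = loss_fn(model(heldout_x), heldout_y)
    return (lc.unsqueeze(1) < lh.unsqueeze(0)).float().mean().item()

def train_epoch(x, y, batch=100):
    for i in range(0, len(x), batch):
        opt.zero_grad()
        loss_fn(model(x[i:i + batch]), y[i:i + batch]).mean().backward()
        opt.step()

# Epoch 0: the canaries are mixed into the training data exactly once.
train_epoch(torch.cat([train_x, canary_x]), torch.cat([train_y, canary_y]))
print(f"after exposure: attack AUC = {attack_auc():.2f}")

# Later epochs: the canaries are never revisited; watch the attack signal decay.
for epoch in range(1, 11):
    fresh_x, fresh_y = make_data(2000)  # a stream of new data, as in large-scale training
    train_epoch(fresh_x, fresh_y)
    print(f"epoch {epoch:2d}: attack AUC = {attack_auc():.2f}")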


BibTeX
@inproceedings{JTTI+23,
  author       = {Jagielski, Matthew and Thakkar, Om and Tram{\`e}r, Florian and Ippolito, Daphne and Lee, Katherine and Carlini, Nicholas and Wallace, Eric and Song, Shuang and Thakurta, Abhradeep and Papernot, Nicolas and Zhang, Chiyuan},
  title        = {Measuring Forgetting of Memorized Training Examples},
  booktitle    = {International Conference on Learning Representations (ICLR)},
  year         = {2023},
  howpublished = {arXiv preprint arXiv:2207.00099},
  url          = {https://arxiv.org/abs/2207.00099}
}