Counterfactual Memorization in Neural Language Models

Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr and Nicholas Carlini

Conference on Neural Information Processing Systems (NeurIPS) 2023 (Spotlight Presentation)



Abstract

Modern neural language models widely used in tasks across NLP risk memorizing sensitive information from their training data. As models continue to scale up in parameters, training data, and compute, understanding memorization in language models is important both from a learning-theoretic point of view and for real-world applications. An open question in previous studies of memorization in language models is how to filter out "common" memorization. In fact, most memorization criteria strongly correlate with the number of occurrences in the training set, capturing "common" memorization such as familiar phrases, public knowledge, or templated texts. In this paper, we provide a principled perspective inspired by a taxonomy of human memory in Psychology. From this perspective, we formulate a notion of counterfactual memorization, which characterizes how a model's predictions change if a particular document is omitted during training. We identify and study counterfactually-memorized training examples in standard text datasets. We further estimate the influence of each training example on the validation set and on generated texts, and show that this can provide direct evidence of the source of memorization at test time.
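
The sketch below is a simplified illustration (not the paper's released implementation) of how a counterfactual-memorization estimate could be computed once per-example scores from models trained on random training subsets are available: for each training example, compare the average performance of models that saw it with the average performance of models that did not. The array names scores and in_subset are hypothetical placeholders.

import numpy as np

def counterfactual_memorization(scores: np.ndarray, in_subset: np.ndarray) -> np.ndarray:
    """Return one counterfactual-memorization estimate per training example.

    scores[m, i]    -- performance (e.g. per-token accuracy) of model m on example i,
                       for m models trained on random subsets of the training set.
    in_subset[m, i] -- True if example i was in model m's training subset.
    """
    in_mask = in_subset.astype(float)
    out_mask = 1.0 - in_mask

    # Average performance over models that did / did not train on each example.
    perf_in = (scores * in_mask).sum(axis=0) / np.maximum(in_mask.sum(axis=0), 1.0)
    perf_out = (scores * out_mask).sum(axis=0) / np.maximum(out_mask.sum(axis=0), 1.0)

    # Counterfactual memorization: how much training on the example itself
    # improves the model's performance on that same example.
    return perf_in - perf_out

# Example usage with random placeholder data: 40 models, 1000 training examples.
rng = np.random.default_rng(0)
scores = rng.random((40, 1000))
in_subset = rng.random((40, 1000)) < 0.5
mem = counterfactual_memorization(scores, in_subset)
print(mem.shape, mem[:5])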


BibTeX
@inproceedings{ZILJ+23,
  author        = {Zhang, Chiyuan and Ippolito, Daphne and Lee, Katherine and Jagielski, Matthew and Tram{\`e}r, Florian and Carlini, Nicholas},
  title         = {Counterfactual Memorization in Neural Language Models},
  booktitle     = {Conference on Neural Information Processing Systems (NeurIPS)},
  year          = {2023},
  eprint        = {2112.12938},
  archiveprefix = {arXiv},
  url           = {https://arxiv.org/abs/2112.12938}
}