Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy

Daphne Ippolito, Florian Tramèr, Milad Nasr, Chiyuan Zhang, Matthew Jagielski, Katherine Lee, Christopher A. Choquette-Choo and Nicholas Carlini


Studying data memorization in neural language models helps us understand the risks (e.g., to privacy or copyright) associated with models regurgitating training data, and aids in the evaluation of potential countermeasures. Many prior works – and some recently deployed defenses – focus on "verbatim memorization", defined as a model generation that exactly matches a substring from the training set. We argue that verbatim memorization definitions are too restrictive and fail to capture more subtle forms of memorization. Specifically, we design and implement an efficient defense based on Bloom filters that perfectly prevents all verbatim memorization. And yet, we demonstrate that this "perfect" filter does not prevent the leakage of training data. Indeed, it is easily circumvented by plausible and minimally modified "style-transfer" prompts – and in some cases even the non-modified original prompts – to extract memorized information. For example, instructing the model to output all-capitalized text bypasses memorization checks based on verbatim matching. We conclude by discussing potential alternative definitions and why defining memorization is a difficult yet crucial open question for neural language models.
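The defense and its weakness can be illustrated with a minimal sketch. The code below (names, parameters, and the n-gram granularity are our illustrative choices, not the paper's implementation) stores training n-grams in a small Bloom filter and flags any generation containing a matching n-gram — then shows that a trivial "style-transfer" such as upper-casing evades the exact-match check:

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter over byte strings (illustrative only)."""
    def __init__(self, size_bits=1 << 20, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _indices(self, item: bytes):
        # Derive several hash indices from SHA-256 with a per-hash salt.
        for i in range(self.num_hashes):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, item: bytes):
        for idx in self._indices(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indices(item))

def ngrams(text: str, n: int):
    tokens = text.split()
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

N = 5  # flag any generation containing a 5-gram seen in training (our choice)
training_doc = "the quick brown fox jumps over the lazy dog near the river bank"
bf = BloomFilter()
for g in ngrams(training_doc, N):
    bf.add(g.encode())

def emits_memorized(generation: str) -> bool:
    """True if any n-gram of the generation appears in the training filter."""
    return any(g.encode() in bf for g in ngrams(generation, N))

verbatim = "fox jumps over the lazy dog"
print(emits_memorized(verbatim))          # True: verbatim 5-gram is caught
print(emits_memorized(verbatim.upper()))  # False: ALL-CAPS copy slips through
```

The second check is the paper's point in miniature: the all-caps output carries exactly the same information as the training text, yet no exact-match (or n-gram) filter over raw strings will ever flag it.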

@misc{ippolito2022preventing,
  author       = {Ippolito, Daphne and Tram{\`e}r, Florian and Nasr, Milad and Zhang, Chiyuan and Jagielski, Matthew and Lee, Katherine and Choquette-Choo, Christopher A. and Carlini, Nicholas},
  title        = {Preventing Verbatim Memorization in Language Models Gives a False Sense of Privacy},
  year         = {2022},
  howpublished = {arXiv preprint arXiv:2210.17546},
  url          = {https://arxiv.org/abs/2210.17546}
}