Measuring Non-Adversarial Reproduction of Training Data in Large Language Models

Michael Aerni, Javier Rando, Edoardo Debenedetti, Nicholas Carlini, Daphne Ippolito and Florian Tramèr



Abstract

Large language models memorize parts of their training data. Memorizing short snippets and facts is required to answer questions about the world and to be fluent in any language. But models have also been shown to reproduce long verbatim sequences of memorized text when prompted by a motivated adversary. In this work, we investigate an intermediate regime of memorization that we call non-adversarial reproduction, where we quantify the overlap between model responses and pretraining data when responding to natural and benign prompts. For a variety of innocuous prompt categories (e.g., writing a letter or a tutorial), we show that up to 15% of the text output by popular conversational language models overlaps with snippets from the Internet. In the worst cases, we find generations where 100% of the content can be found exactly online. For the same tasks, we find that human-written text has far less overlap with Internet data. We further study whether prompting strategies can close this reproduction gap between models and humans. While appropriate prompting can reduce non-adversarial reproduction on average, we find that mitigating worst-case reproduction of training data requires stronger defenses – even for benign interactions.
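The core quantity in the abstract – the fraction of a model's output that overlaps verbatim with text found online – can be illustrated with a toy overlap metric. The sketch below is not the paper's actual pipeline: the snippet length `k`, the brute-force substring index, and the example corpus are all illustrative assumptions (a real measurement would match against a web-scale corpus using an efficient index such as a suffix array).

```python
def overlap_fraction(response: str, corpus: list[str], k: int = 10) -> float:
    """Return the fraction of characters in `response` that lie inside
    some length-k substring found verbatim in `corpus`.

    Toy implementation: builds an in-memory set of all length-k
    substrings of the corpus, then marks every response character that
    is covered by a matching k-character window.
    """
    if len(response) == 0:
        return 0.0
    # Index every length-k substring of the corpus (illustrative only;
    # this does not scale to a pretraining-sized corpus).
    kgrams = {doc[i:i + k] for doc in corpus for i in range(len(doc) - k + 1)}
    covered = [False] * len(response)
    for i in range(len(response) - k + 1):
        if response[i:i + k] in kgrams:
            for j in range(i, i + k):
                covered[j] = True
    return sum(covered) / len(response)


corpus = ["the quick brown fox jumps over the lazy dog"]
print(overlap_fraction("quick brown fox", corpus))  # fully reproduced: 1.0
print(overlap_fraction("entirely novel phrasing here", corpus))  # no overlap: 0.0
```

A response copied wholesale from the corpus scores 1.0 (the "100% of the content can be found exactly online" case), while fully original text scores 0.0; the 15% figure in the abstract corresponds to intermediate values of this kind of metric, averaged over many generations.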


BibTeX
@misc{ARDC+24,
  author       = {Aerni, Michael and Rando, Javier and Debenedetti, Edoardo and Carlini, Nicholas and Ippolito, Daphne and Tram{\`e}r, Florian},
  title        = {Measuring Non-Adversarial Reproduction of Training Data in Large Language Models},
  year         = {2024},
  howpublished = {arXiv preprint arXiv:2411.10242},
  url          = {https://arxiv.org/abs/2411.10242}
}