Large Language Models Can Be Strong Differentially Private Learners

Xuechen Li, Florian Tramèr, Percy Liang and Tatsunori Hashimoto

International Conference on Learning Representations (ICLR) 2022 (Oral Presentation)

Previously presented at NeurIPS 2021 Workshop Privacy in Machine Learning (PRIML) (Oral Presentation)



Abstract

Differentially Private (DP) learning has seen limited success for building large deep learning models of text, and straightforward attempts to apply Differentially Private Stochastic Gradient Descent (DP-SGD) to NLP tasks have resulted in large performance drops and high computational overhead. We show that this performance drop can be mitigated with (1) the use of large pretrained models; (2) hyperparameters that suit DP optimization; and (3) fine-tuning objectives aligned with the pretraining procedure. With these factors set right, we obtain private NLP models that outperform state-of-the-art private training approaches and strong non-private baselines, by directly fine-tuning pretrained models with DP optimization on moderately-sized corpora. To address the computational challenge of running DP-SGD with large Transformers, we propose a memory-saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients for any layer in the model. The technique enables privately training Transformers with almost the same memory cost as non-private training, at a modest runtime overhead. Contrary to conventional wisdom that DP optimization fails at learning high-dimensional models (due to noise that scales with dimension), empirical results reveal that private learning with pretrained models tends not to suffer from dimension-dependent performance degradation.
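To make the DP-SGD step discussed above concrete, the sketch below shows naive per-example gradient clipping plus Gaussian noise in plain PyTorch. It is only illustrative: it materializes per-example gradients via an explicit loop (precisely the memory cost the paper's memory-saving technique avoids), and the names model, inputs, targets, clip_norm, and noise_multiplier are placeholders, not the paper's API.

import torch
import torch.nn.functional as F

def dp_sgd_step(model, inputs, targets, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    """One illustrative DP-SGD update: clip each example's gradient to
    clip_norm, sum, add Gaussian noise, average, then step."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed_grads = [torch.zeros_like(p) for p in params]

    # Naive loop: compute and clip each example's gradient separately.
    for x, y in zip(inputs, targets):
        loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-6)).clamp(max=1.0)
        for acc, g in zip(summed_grads, grads):
            acc += g * scale

    # Add noise calibrated to the clipping norm, then average over the batch.
    batch_size = len(inputs)
    for p, acc in zip(params, summed_grads):
        noise = torch.randn_like(p) * (noise_multiplier * clip_norm)
        p.grad = (acc + noise) / batch_size

    optimizer.step()
    optimizer.zero_grad()

The paper's contribution is to perform the clipping step without ever instantiating these per-example gradients; the loop here is only meant to show what is being clipped and noised.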


BibTeX
@inproceedings{LTLH22,
  author    = {Li, Xuechen and Tram{\`e}r, Florian and Liang, Percy and Hashimoto, Tatsunori},
  title     = {Large Language Models Can Be Strong Differentially Private Learners},
  booktitle = {International Conference on Learning Representations (ICLR)},
  year      = {2022},
  note      = {arXiv preprint arXiv:2110.05679},
  url       = {https://arxiv.org/abs/2110.05679}
}