Considerations for Differentially Private Learning with Large-Scale Public Pretraining

Florian Tramèr, Gautam Kamath, and Nicholas Carlini



Abstract

The performance of differentially private machine learning can be boosted significantly by leveraging the transfer learning capabilities of non-private models pretrained on large public datasets. We critically review this approach.
We primarily question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving. We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public’s trust in differential privacy as a meaningful definition of privacy.
Beyond the privacy considerations of using public data, we further question the utility of this paradigm. We scrutinize whether existing machine learning benchmarks are appropriate for measuring the ability of pretrained models to generalize to sensitive domains, which may be poorly represented in public Web data. Finally, we note that pretraining has been especially impactful for the largest available models: models sufficiently large to prohibit end users from running them on their own devices. Thus, deploying such models today could be a net loss for privacy, as it would require (private) data to be outsourced to a more compute-powerful third party.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
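The paradigm discussed above, public pretraining followed by differentially private fine-tuning, is typically instantiated with DP-SGD: per-example gradients are clipped to a norm bound C, summed, and perturbed with Gaussian noise before each update. Below is a minimal illustrative sketch in PyTorch, not the paper's experimental setup: the frozen stand-in backbone, the dimensions, the hyperparameters, and the dp_sgd_step helper are all hypothetical placeholders, and in practice the noise multiplier sigma would be calibrated to a target (epsilon, delta) with a privacy accountant.

import torch
import torch.nn as nn

# Hypothetical stand-in for a publicly pretrained feature extractor,
# frozen so that only the small head is trained on sensitive data.
feature_dim, num_classes = 512, 10
backbone = nn.Linear(784, feature_dim)
backbone.requires_grad_(False)
head = nn.Linear(feature_dim, num_classes)

clip_norm = 1.0         # per-example gradient clipping bound C (assumed)
noise_multiplier = 1.0  # sigma; calibrate via a privacy accountant in practice
lr = 0.1

def dp_sgd_step(xs, ys):
    # One DP-SGD step: clip each per-example gradient to norm C,
    # sum them, add Gaussian noise with std sigma * C, then average.
    summed = [torch.zeros_like(p) for p in head.parameters()]
    for x, y in zip(xs, ys):  # explicit per-example loop, for clarity
        head.zero_grad()
        loss = nn.functional.cross_entropy(
            head(backbone(x.unsqueeze(0))), y.unsqueeze(0))
        loss.backward()
        grads = [p.grad.detach() for p in head.parameters()]
        total_norm = torch.sqrt(sum(g.norm() ** 2 for g in grads))
        scale = min(1.0, clip_norm / (total_norm + 1e-6))
        for s, g in zip(summed, grads):
            s += g * scale
    with torch.no_grad():
        for p, s in zip(head.parameters(), summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p -= lr * (s + noise) / len(xs)

# Toy usage with random placeholder data:
xs = torch.randn(32, 784)
ys = torch.randint(0, num_classes, (32,))
dp_sgd_step(xs, ys)

Libraries such as Opacus automate the per-example clipping and the privacy accounting; the sketch only makes explicit where the private data, the noise, and the (frozen) public model enter the pipeline.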


BibTeX
@misc{TKC22,
  author       = {Tram{\`e}r, Florian and Kamath, Gautam and Carlini, Nicholas},
  title        = {Considerations for Differentially Private Learning with Large-Scale Public Pretraining},
  year         = {2022},
  howpublished = {arXiv preprint arXiv:2212.06470},
  url          = {https://arxiv.org/abs/2212.06470}
}