Tight Auditing of Differentially Private Machine Learning

Milad Nasr, Jamie Hayes, Thomas Steinke, Borja Balle, Florian Tramèr, Matthew Jagielski, Nicholas Carlini and Andreas Terzis

USENIX Security Symposium 2023 (Distinguished Paper Award)



Abstract

Auditing mechanisms for differential privacy use probabilistic means to empirically estimate the privacy level of an algorithm. For private machine learning, existing auditing mechanisms are tight: the empirical privacy estimate (nearly) matches the algorithm’s provable privacy guarantee. But these auditing techniques suffer from two limitations. First, they only give tight estimates under implausible worst-case assumptions (e.g., a fully adversarial dataset). Second, they require thousands or millions of training runs to produce non-trivial statistical estimates of the privacy leakage.
This work addresses both issues. We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets – if the adversary can see all model updates during training. Prior auditing works rely on the same assumption, which is permitted under the standard differential privacy threat model. This threat model is also applicable, e.g., in federated learning settings. Moreover, our auditing scheme requires only two training runs (instead of thousands) to produce tight privacy estimates, by adapting recent advances in tight composition theorems for differential privacy. We demonstrate the utility of our improved auditing schemes by surfacing implementation bugs in private machine learning code that eluded prior auditing techniques.
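
To make the second limitation concrete, below is a minimal sketch (in Python; the function names, attack counts, and delta value are illustrative, not from the paper) of the classical auditing recipe used by prior work: run a membership-inference attack across many independent training runs, then convert its true- and false-positive rates into a statistically valid lower bound on epsilon via the DP constraint TPR <= exp(epsilon) * FPR + delta. The Clopper-Pearson confidence intervals are what force thousands of runs: the certified epsilon cannot exceed what the sample size supports.

import numpy as np
from scipy.stats import beta

def clopper_pearson(successes, trials, alpha=0.05):
    # Exact two-sided binomial confidence interval.
    lo = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lo, hi

def empirical_epsilon_lower_bound(tp, n_in, fp, n_out, delta=1e-5):
    # (eps, delta)-DP implies TPR <= exp(eps) * FPR + delta for any attack,
    # so eps >= log((TPR - delta) / FPR); taking pessimistic interval
    # endpoints makes the bound hold with high confidence.
    tpr_lo, _ = clopper_pearson(tp, n_in)
    _, fpr_hi = clopper_pearson(fp, n_out)
    if fpr_hi <= 0.0 or tpr_lo <= delta:
        return 0.0
    return max(0.0, np.log((tpr_lo - delta) / fpr_hi))

# A near-perfect attack over 1,000 training runs per world still certifies
# only a modest epsilon (roughly 4 here): hence "thousands or millions of runs".
print(empirical_epsilon_lower_bound(tp=990, n_in=1000, fp=10, n_out=1000))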
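
The two-run scheme itself can be sketched end-to-end on a toy stand-in. In the snippet below, a Gaussian mechanism replaces DP-SGD, the canary gradient is assumed to have unit norm, and a simple mean/variance estimator stands in for the paper's more careful per-step statistics; all names and parameters are illustrative. The point it conveys: when the adversary sees every model update, each of the T steps yields one observation of the per-step Gaussian DP parameter mu, and tight composition gives mu_total = sqrt(T) * mu, which converts to an (epsilon, delta) estimate; this is why two runs suffice.

import numpy as np
from scipy.stats import norm

def training_run(canary_present, sigma, steps, rng):
    # One simulated DP-SGD run. Each entry is the projection of a noisy model
    # update onto the (unit-norm) canary gradient direction: N(1, sigma^2)
    # when the canary is in the dataset, N(0, sigma^2) when it is not.
    return float(canary_present) + rng.normal(0.0, sigma, size=steps)

def empirical_mu(scores_in, scores_out):
    # Per-step Gaussian DP parameter: separation of the two score
    # distributions in pooled-standard-deviation units.
    pooled = np.sqrt(0.5 * (np.var(scores_in) + np.var(scores_out)))
    return (np.mean(scores_in) - np.mean(scores_out)) / pooled

def mu_to_epsilon(mu, delta=1e-5):
    # Convert Gaussian DP mu to (epsilon, delta) by bisection on
    # delta(eps) = Phi(-eps/mu + mu/2) - exp(eps) * Phi(-eps/mu - mu/2).
    lo, hi = 0.0, 100.0
    for _ in range(200):
        eps = 0.5 * (lo + hi)
        d = norm.cdf(-eps / mu + mu / 2) - np.exp(eps) * norm.cdf(-eps / mu - mu / 2)
        lo, hi = (eps, hi) if d > delta else (lo, eps)
    return hi

rng = np.random.default_rng(0)
sigma, steps = 5.0, 1000  # toy per-step noise multiplier and step count

# Exactly two training runs: one with the canary, one without.
run_in = training_run(True, sigma, steps, rng)
run_out = training_run(False, sigma, steps, rng)

mu_step = empirical_mu(run_in, run_out)  # estimated from 2 * steps observations
mu_total = np.sqrt(steps) * mu_step      # tight Gaussian composition over steps
print(f"per-step mu ~= {mu_step:.3f} (true value: {1 / sigma:.3f})")
print(f"empirical epsilon at delta=1e-5 ~= {mu_to_epsilon(mu_total):.2f}")

In the paper's actual scheme the per-step statistics come from the exact likelihoods under Gaussian DP and carry confidence intervals; the simplification above only conveys the structure.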


BibTeX
@inproceedings{NHSB+23,
  author    = {Nasr, Milad and Hayes, Jamie and Steinke, Thomas and Balle, Borja and Tram{\`e}r, Florian and Jagielski, Matthew and Carlini, Nicholas and Terzis, Andreas},
  title     = {Tight Auditing of Differentially Private Machine Learning},
  booktitle = {USENIX Security Symposium},
  year      = {2023},
  note      = {arXiv preprint arXiv:2302.07956},
  url       = {https://arxiv.org/abs/2302.07956}
}