Debugging Differential Privacy: A Case Study for Privacy Auditing

Florian Tramèr, Andreas Terzis, Thomas Steinke, Shuang Song, Matthew Jagielski, and Nicholas Carlini (reverse-alphabetical author ordering)



Abstract

Differential Privacy can provide provable privacy guarantees for training data in machine learning. However, the presence of proofs does not preclude the presence of errors. Inspired by recent advances in auditing, which have been used to estimate lower bounds on the privacy leakage of differentially private algorithms, here we show that auditing can also be used to find flaws in (purportedly) differentially private schemes. In this case study, we audit a recent open-source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
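
The auditing idea the abstract alludes to can be illustrated with a short, self-contained sketch: repeatedly train a model with and without a "canary" example, run a membership-inference attack on each model, and convert the attack's true- and false-positive counts into a statistical lower bound on the privacy parameter epsilon. The sketch below is a hypothetical illustration under assumed choices (a Clopper-Pearson confidence interval, the epsilon_lower_bound helper, and made-up attack counts); it is not the paper's actual audit code.

import math
from scipy.stats import beta

def clopper_pearson(successes, trials, alpha):
    # Two-sided Clopper-Pearson confidence interval for a binomial proportion.
    lo = beta.ppf(alpha / 2, successes, trials - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, trials - successes) if successes < trials else 1.0
    return lo, hi

def epsilon_lower_bound(tp, fp, trials, alpha=1e-10):
    # Statistical lower bound on epsilon implied by attack outcomes (hypothetical helper).
    # tp: attack successes when the canary IS in the training set (out of `trials` runs)
    # fp: attack false positives when the canary is NOT in the training set
    # alpha: total failure probability; alpha = 1e-10 corresponds to ~99.99999999% confidence.
    tpr_lo, _ = clopper_pearson(tp, trials, alpha)   # each one-sided bound holds at level alpha/2;
    _, fpr_hi = clopper_pearson(fp, trials, alpha)   # a union bound gives overall level alpha.
    if tpr_lo <= 0.0 or fpr_hi <= 0.0:
        return 0.0
    # Pure (eps, 0)-DP implies TPR <= exp(eps) * FPR, hence eps >= ln(TPR / FPR).
    return max(0.0, math.log(tpr_lo / fpr_hi))

# Hypothetical example: 1000 audit runs per world, with a highly successful attack.
print(epsilon_lower_bound(tp=980, fp=20, trials=1000))

If the lower bound returned here exceeds the epsilon claimed by an implementation, the claimed guarantee is violated with the stated confidence; this is the style of argument the audit in the paper uses.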


BibTeX
@misc{TTSS+22,
  author       = {Tram{\`e}r, Florian and Terzis, Andreas and Steinke, Thomas and Song, Shuang and Jagielski, Matthew and Carlini, Nicholas},
  title        = {Debugging Differential Privacy: A Case Study for Privacy Auditing},
  year         = {2022},
  howpublished = {arXiv preprint arXiv:2202.12219},
  url          = {https://arxiv.org/abs/2202.12219}
}