Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data

Jie Zhang, Debeshee Das, Gautam Kamath and Florian Tramèr



Abstract

We consider the problem of a training data proof, where a data creator or owner wants to demonstrate to a third party that some machine learning model was trained on their data. Training data proofs play a key role in recent lawsuits against foundation models trained on web-scale data. Many prior works suggest instantiating training data proofs using membership inference attacks. We argue that this approach is fundamentally unsound: to provide convincing evidence, the data creator needs to demonstrate that their attack has a low false positive rate, i.e., that the attack’s output is unlikely under the null hypothesis that the model was not trained on the target data. Yet, sampling from this null hypothesis is impossible, as we do not know the exact contents of the training set, nor can we (efficiently) retrain a large foundation model. We conclude by offering two paths forward, showing that data extraction attacks and membership inference on special canary data can be used to create sound training data proofs.
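
To make the hypothesis-testing framing concrete, below is a minimal Python sketch of the canary-based path. This is an illustration under stated assumptions, not code from the paper: model_loss stands for a hypothetical scoring function (e.g., the model's per-example loss), and the data owner is assumed to have generated random canaries, published some alongside their content, and kept the rest private. The held-out canaries give exact samples from the null hypothesis ("the model was not trained on this data"), which is precisely what an off-the-shelf membership inference attack cannot provide.

import numpy as np
from scipy import stats

def canary_training_data_proof(model_loss, published, held_out, alpha=0.05):
    # Scores for canaries released with the owner's data (possibly trained on)
    # and for canaries the owner kept private (certainly not trained on).
    published_scores = np.array([model_loss(c) for c in published])
    held_out_scores = np.array([model_loss(c) for c in held_out])
    # Under the null hypothesis, published and held-out canaries are
    # exchangeable (both random, neither seen in training), so a one-sided
    # rank test yields a valid p-value: if the model memorized the
    # published canaries, their losses should be systematically lower.
    _, p_value = stats.mannwhitneyu(published_scores, held_out_scores,
                                    alternative="less")
    return p_value, bool(p_value < alpha)

A rejection at level alpha is sound here because the false positive rate is controlled by construction, with no need to retrain the model or to know the rest of its training set. The rank test is one reasonable choice of statistic for this sketch; the paper's exact test may differ.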


BibTeX
@misc{ZDKT24,
  author       = {Zhang, Jie and Das, Debeshee and Kamath, Gautam and Tram{\`e}r, Florian},
  title        = {Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data},
  year         = {2024},
  howpublished = {arXiv preprint arXiv:2409.19798},
  url          = {https://arxiv.org/abs/2409.19798}
}