Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware

Florian Tramèr and Dan Boneh



As Machine Learning (ML) gets applied to security-critical or sensitive domains, there is a growing need for integrity and privacy guarantees for ML computations running in untrusted environments. A pragmatic solution comes from Trusted Execution Environments, which use hardware and software protections to isolate sensitive computations from the untrusted software stack. However, these isolation guarantees come at a price in performance, compared to untrusted alternatives. This paper initiates the study of high performance execution of Deep Neural Networks (DNNs) in trusted environments by efficiently partitioning computations between trusted and untrusted devices. Building upon a simple secure outsourcing scheme for matrix multiplication, we propose Slalom, a framework that outsources execution of all linear layers in a DNN from any trusted environment (e.g., SGX, TrustZone or Sanctum) to a faster co-located device. We evaluate Slalom by executing DNNs in an Intel SGX enclave, which selectively outsources work to an untrusted GPU. For two canonical DNNs, VGG16 and MobileNet, we obtain 20x and 6x increases in throughput for verifiable inference, and 10x and 3.5x for verifiable and private inference.
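The abstract builds on a "simple secure outsourcing scheme for matrix multiplication": the trusted environment offloads a product A·B to an untrusted device and verifies the returned result far more cheaply than recomputing it. A classic instance of this idea is Freivalds' probabilistic check, sketched below in pure Python as an illustration of the general technique (not necessarily the exact scheme used in the paper; function names are ours):

```python
import random

def matvec(M, v):
    """Matrix-vector product for a matrix given as a list of rows."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def freivalds_check(A, B, C, rounds=20):
    """Probabilistically verify that C equals the product A @ B.

    Each round draws a random 0/1 vector r and compares A(Br) with Cr,
    using only matrix-vector products. A wrong C passes a single round
    with probability at most 1/2, so `rounds` repetitions bound the
    error by 2**-rounds, while an n x n verification costs O(n^2) per
    round instead of the O(n^3) needed to recompute the product.
    """
    k = len(B[0])  # number of columns of B (and of the claimed C)
    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(k)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # caught an incorrect result
    return True  # consistent with A @ B with overwhelming probability
```

For example, `freivalds_check([[1, 2], [3, 4]], [[5, 6], [7, 8]], [[19, 22], [43, 50]])` accepts the correct product, while corrupting a single entry of C is rejected except with probability about 2⁻²⁰. The asymmetry between verifying (quadratic) and computing (cubic) is what makes it worthwhile for a slow trusted device to outsource linear layers to a fast untrusted GPU.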

@misc{tramer2018slalom,
  author       = {Tram{\`e}r, Florian and Boneh, Dan},
  title        = {{Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware}},
  year         = {2018},
  howpublished = {arXiv preprint arXiv:1806.03287},
  note         = {\url{https://arxiv.org/abs/1806.03287}},
}