Scalable Differential Privacy for Deep Learning with Nicolas Papernot - TWiML Talk #134

Released Thursday, 3rd May 2018

In this episode of our Differential Privacy series, I'm joined by Nicolas Papernot, Google PhD Fellow in Security and graduate student in the Department of Computer Science at Penn State University. Nicolas and I continue this week's look into differential privacy with a discussion of his recent paper, Semi-supervised Knowledge Transfer for Deep Learning from Private Training Data. In our conversation, Nicolas describes the Private Aggregation of Teacher Ensembles (PATE) model proposed in the paper, and how it provides differential privacy in a scalable way that can be applied to deep neural networks. We also explore an interesting side effect of applying differential privacy to machine learning: it inherently resists overfitting, leading to models that generalize better. The notes for this show can be found at twimlai.com/talk/134.
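
For listeners curious about the mechanism discussed in the episode, below is a minimal sketch of the kind of noisy vote aggregation at the heart of PATE: each teacher, trained on a disjoint partition of the private data, votes on a label for a student query, Laplace noise is added to the vote counts, and the noisy argmax becomes the label the student learns from. The function name, the noise parameter gamma, and the toy vote data are illustrative assumptions, not details taken from the episode or the paper.

    # Illustrative sketch of PATE-style noisy label aggregation (not the authors' code).
    # `teacher_preds` holds one predicted class label per teacher for a single student query.
    import numpy as np

    def noisy_aggregate(teacher_preds, num_classes, gamma=0.05, rng=None):
        # Count votes per class, perturb each count with Laplace noise of
        # scale 1/gamma, and return the label with the highest noisy count.
        rng = rng or np.random.default_rng()
        counts = np.bincount(teacher_preds, minlength=num_classes).astype(float)
        counts += rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
        return int(np.argmax(counts))

    # Toy example: 250 teachers voting over 10 classes for one unlabeled student example.
    rng = np.random.default_rng(0)
    votes = rng.integers(0, 10, size=250)
    print(noisy_aggregate(votes, num_classes=10, gamma=0.05, rng=rng))

A smaller noise scale (larger gamma) gives more accurate aggregated labels but a weaker privacy guarantee; the student model is trained only on these noisy labels and never sees the private training data directly.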
