Learn Differential Privacy for ML
Master the gold standard for mathematical privacy guarantees in machine learning. From epsilon-delta fundamentals to DP-SGD training and production-ready tools — build models that provably protect individual privacy.
Your Learning Path
Follow these lessons in order to build a complete understanding of differential privacy for ML.
1. Introduction
What is differential privacy, why it matters for ML, the intuition behind DP, and real-world deployments by Apple, Google, and the US Census Bureau.
2. DP Fundamentals
Epsilon and delta parameters, Laplace and Gaussian mechanisms, sensitivity, composition theorems, and privacy budgets.
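As a taste of what this lesson covers, here is a minimal sketch of the Laplace mechanism in Python (the function name and example values are illustrative, not taken from the lesson itself):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with noise drawn from Laplace(0, sensitivity/epsilon).

    For a counting query the sensitivity is 1: adding or removing one
    person's record changes the count by at most 1.
    """
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Example: privately release a count of 100 with epsilon = 0.5
rng = np.random.default_rng(0)
noisy_count = laplace_mechanism(100, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means a larger noise scale and therefore stronger privacy at the cost of accuracy.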
3. DP-SGD
Differentially Private Stochastic Gradient Descent. Per-example gradient clipping, noise injection, and privacy accounting during training.
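In outline, a single DP-SGD update can be sketched in numpy as below. This is a simplified illustration (the function name is made up, and per-example gradients are assumed to be precomputed; libraries like Opacus handle this inside the training loop):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier, rng):
    """One DP-SGD update: clip each example's gradient, sum, add noise, step."""
    # 1. Clip each per-example gradient to L2 norm <= clip_norm
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    # 2. Sum clipped gradients and add Gaussian noise calibrated
    #    to the clipping bound
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(
        0.0, noise_multiplier * clip_norm, size=params.shape)
    # 3. Average over the batch and take an ordinary gradient step
    grad = noisy_sum / len(per_example_grads)
    return params - lr * grad
```

The clipping bound caps any one example's influence, which is what makes the added Gaussian noise yield a formal privacy guarantee; a privacy accountant then tracks the cumulative epsilon across steps.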
4. Local vs Global DP
Central (global) vs. local differential privacy models. Randomized response, RAPPOR, and choosing the right trust model for your application.
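Randomized response, the classic local-DP technique, fits in a few lines. A minimal sketch (function names are illustrative):

```python
import math
import random

def randomized_response(true_answer, epsilon, rng):
    """Answer truthfully with probability e^eps / (e^eps + 1); otherwise lie.

    Each respondent randomizes locally, so the data collector never
    sees a raw answer, yet aggregates remain estimable.
    """
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return true_answer if rng.random() < p_truth else not true_answer

def estimate_true_rate(reported_yes_fraction, epsilon):
    """Invert the known response bias to recover the population rate."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return (reported_yes_fraction + p - 1.0) / (2.0 * p - 1.0)
```

This illustrates the local trust model: privacy is applied before data leaves the device, at the cost of more noise per respondent than central DP.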
5. Tools & Libraries
OpenDP, Google's DP library, Opacus (PyTorch), TensorFlow Privacy, and PipelineDP for production systems.
6. Best Practices
Privacy budget management, hyperparameter tuning under DP, utility-privacy trade-offs, and common pitfalls to avoid.
What You'll Learn
By the end of this course, you'll be able to:
Understand DP Theory
Grasp epsilon-delta privacy, noise mechanisms, sensitivity, and composition theorems.
Train DP Models
Use DP-SGD to train neural networks with provable privacy guarantees.
Use DP Tools
Apply OpenDP, Opacus, and TensorFlow Privacy to real ML pipelines.
Manage Privacy Budgets
Balance privacy and utility through careful budget allocation and accounting.
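As a sketch of what budget accounting can look like in code, here is a tiny tracker using basic sequential composition, under which epsilons of successive releases simply add (the class and method names are illustrative):

```python
class PrivacyBudget:
    """Track cumulative epsilon spent under basic sequential composition."""

    def __init__(self, total_epsilon):
        self.total = total_epsilon
        self.spent = 0.0

    def spend(self, epsilon):
        # Refuse any release that would exceed the total budget
        if self.spent + epsilon > self.total:
            raise RuntimeError("privacy budget exhausted")
        self.spent += epsilon

    def remaining(self):
        return self.total - self.spent
```

Production accountants (e.g. the moments accountant used with DP-SGD) give tighter bounds than simple addition, but the bookkeeping discipline is the same.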
Lilly Tech Systems