CS5720 - Week 13
Slide 245 of 260

Differential Privacy in Deep Learning

Mathematical Privacy Guarantee

Differential Privacy provides a mathematical guarantee that the output of a randomized algorithm M is nearly the same whether or not any single individual's data is included. Formally, for any two datasets D₁ and D₂ differing in one record, and any set of outputs S:
Pr[M(D₁) ∈ S] ≤ e^ε × Pr[M(D₂) ∈ S]
🧠 Intuitive Understanding
Whether or not Alice's data is in the dataset, an attacker shouldn't be able to tell the difference from the AI model's behavior. The smaller ε (epsilon) is, the stronger the privacy protection!
⚖️ Privacy-Utility Tradeoff
Stronger privacy (smaller ε) means adding more noise, which can reduce model accuracy. Finding the right balance is crucial for practical applications.
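The tradeoff above can be seen directly in the Laplace mechanism (covered next): the noise scale is sensitivity/ε, so a smaller ε injects more noise. A minimal sketch in NumPy (the function name and the counting-query scenario are illustrative, not from the slide):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    # Noise scale b = sensitivity / epsilon: smaller epsilon -> larger noise.
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
# Counting query: adding or removing one person changes the count by at
# most 1, so the sensitivity is 1.
true_count = 42
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=1.0, rng=rng)
```

Sampling many times at ε = 0.1 versus ε = 10 makes the tradeoff concrete: the strong-privacy answers scatter far more widely around the true count.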

DP Mechanisms

Implementation Methods for differential privacy in deep learning:
  • 📊
    Laplace Mechanism
    Add noise proportional to sensitivity and ε
  • 🌊
    Gaussian Mechanism
    Add Gaussian noise for (ε,δ)-differential privacy
  • 🎯
    DP-SGD
    Differentially private stochastic gradient descent
  • 🔗
    Private Aggregation
    Securely combine gradients with privacy guarantees
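The core of DP-SGD is the per-example gradient treatment: clip each example's gradient to bound its influence, then add Gaussian noise before averaging. A simplified NumPy sketch (the function name and parameters are illustrative; production code would use a library such as Opacus or TensorFlow Privacy, which also track the privacy budget):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    # 1. Clip each example's gradient to L2 norm <= clip_norm, bounding
    #    any single example's influence (the sensitivity of the sum).
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    # 2. Sum the clipped gradients.
    total = np.sum(clipped, axis=0)
    # 3. Add Gaussian noise with std = noise_multiplier * clip_norm.
    total = total + rng.normal(0.0, noise_multiplier * clip_norm,
                               size=total.shape)
    # 4. Average over the batch; this noisy gradient drives the update.
    return total / len(per_example_grads)

rng = np.random.default_rng(0)
batch_grads = [np.array([3.0, 4.0]), np.array([0.1, 0.0])]
noisy_grad = dp_sgd_step(batch_grads, clip_norm=1.0,
                         noise_multiplier=1.1, rng=rng)
```

Clipping before noising is what makes the Gaussian mechanism applicable here: it fixes the sensitivity that the noise scale must be calibrated to.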

Privacy Budget: Understanding Epsilon (ε)

  • High Privacy (ε = 0.1): Strong protection but reduced accuracy. Typical data: medical records, financial data.
  • Balanced Privacy (ε = 1.0): Good balance of privacy and utility. Typical data: user behavior, demographics.
  • Lower Privacy (ε = 10.0): Minimal protection, high accuracy. Typical data: aggregated statistics, public data.
Key Insight: Differential privacy is composable: privacy costs accumulate with each query!
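Under basic sequential composition, the total cost of several queries on the same data is simply the sum of their ε values (advanced composition theorems give tighter bounds). A tiny budget-tracking sketch, with hypothetical example numbers:

```python
# Basic sequential composition: running mechanisms with budgets
# eps_1, ..., eps_k on the same dataset costs eps_1 + ... + eps_k total.
query_budgets = [0.5, 0.5, 1.0]     # three queries against one dataset
total_epsilon = sum(query_budgets)  # accumulated privacy cost: 2.0

budget_limit = 3.0                  # overall budget set by the data owner
remaining = budget_limit - total_epsilon
```

Once `remaining` hits zero, no further queries can be answered without exceeding the agreed privacy guarantee.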
Prepared by Dr. Gorkem Kar