CS5720 - Week 8

Variational Autoencoders (VAE) - Introduction

What Makes VAEs Special?

A Variational Autoencoder (VAE) is a generative model that learns to encode data into a probability distribution rather than a fixed point, enabling controlled generation of new data.
Key Differences from Standard Autoencoders:

Probabilistic encoding - outputs mean and variance
Continuous latent space - smooth interpolation
Generative capability - create new samples
Regularized - prevents overfitting
🎯 The Big Idea
Instead of mapping to a single point, VAEs map to a distribution. This uncertainty allows for generation and ensures similar inputs have similar representations.
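A minimal PyTorch sketch of this idea (layer sizes and class names here are illustrative assumptions, not from the slide): the encoder outputs μ and log σ² instead of a single code, and the reparameterization trick z = μ + σ·ε keeps the sampling step differentiable.

```python
import torch
import torch.nn as nn

class ProbabilisticEncoder(nn.Module):
    """Maps an input to the parameters of a Gaussian, not to a single point."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=20):
        super().__init__()
        self.hidden = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)

    def forward(self, x):
        h = torch.relu(self.hidden(x))
        return self.mu(h), self.logvar(h)

def reparameterize(mu, logvar):
    """Sample z ~ N(mu, sigma^2) differentiably: z = mu + sigma * eps."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + std * eps
```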

VAE vs Standard Autoencoder

| Aspect         | Standard AE           | VAE                 |
|----------------|-----------------------|---------------------|
| Encoding       | Deterministic         | Probabilistic       |
| Latent space   | Point estimates       | Distributions       |
| Generation     | Limited               | Excellent           |
| Regularization | None inherent         | KL divergence       |
| Interpolation  | May be discontinuous  | Smooth & meaningful |
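To make the interpolation row concrete, here is a small sketch (assuming a trained decoder such as the one sketched later on this slide): decoding evenly spaced points along the straight line between two latent codes produces a gradual morph between samples, which is what "smooth & meaningful" means in practice.

```python
import torch

def interpolate(decoder, z_a, z_b, steps=8):
    """Decode evenly spaced points on the line between latent codes z_a and z_b."""
    alphas = torch.linspace(0.0, 1.0, steps)
    return torch.stack([decoder((1 - a) * z_a + a * z_b) for a in alphas])
```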
Key Insight:
The "variational" part comes from variational inference - approximating the intractable true posterior with a simpler, tractable distribution!

VAE Architecture Overview

📊 Encoder: Input → μ, σ
🎲 Sampling: z ~ N(μ, σ²)
🎨 Decoder: z → Output
📉 Loss: Reconstruction + KL divergence
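A sketch of the remaining two components, paired with the encoder above. The decoder uses sigmoid outputs (assuming a Bernoulli likelihood, a common choice for binarized images, though not the only one), and in the loss the KL term against the standard normal prior N(0, I) has a closed form.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    """Maps a latent sample z back to data space."""
    def __init__(self, latent_dim=20, hidden_dim=256, output_dim=784):
        super().__init__()
        self.hidden = nn.Linear(latent_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, output_dim)

    def forward(self, z):
        h = torch.relu(self.hidden(z))
        # Sigmoid gives per-pixel probabilities for a Bernoulli decoder.
        return torch.sigmoid(self.out(h))

def vae_loss(x, x_recon, mu, logvar):
    """Reconstruction + KL(q(z|x) || N(0, I)), i.e. the negative ELBO."""
    recon = F.binary_cross_entropy(x_recon, x, reduction="sum")
    # Closed-form KL between N(mu, sigma^2) and N(0, I):
    # -0.5 * sum(1 + log(sigma^2) - mu^2 - sigma^2)
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Minimizing this loss improves reconstructions while pulling each q(z|x) toward the prior, which is what makes sampling z ~ N(0, I) at generation time meaningful.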
Prepared by Dr. Gorkem Kar