CS5720 - Week 8

GAN Training Process

Training Steps

1. Initialize Networks: initialize both the Generator and the Discriminator with random weights.
2. Train Discriminator: update D to maximize log D(x) + log(1 − D(G(z))).
3. Train Generator: update G to minimize log(1 − D(G(z))), or in practice maximize log D(G(z)).
4. Repeat & Monitor: alternate the two updates until convergence or satisfactory results.
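The alternating loop above can be sketched end-to-end on a 1-D toy problem. Everything here is an illustrative choice, not from the slides: the generator G(z) = a·z + b tries to match real data drawn from N(4, 1), the discriminator is a scalar logistic unit D(x) = σ(w·x + c), and gradients are written out by hand so the sketch needs only NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Toy 1-D GAN (illustrative setup): real data ~ N(4, 1), noise z ~ N(0, 1).
a, b = 1.0, 0.0            # generator params: G(z) = a*z + b
w, c = 0.1, 0.0            # discriminator params: D(x) = sigmoid(w*x + c)
lr, steps, batch = 0.05, 2000, 64

for step in range(steps):
    # --- Step 2: update D to maximize log D(x) + log(1 - D(G(z))) ---
    x = rng.normal(4.0, 1.0, batch)      # real samples
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b                        # fake samples (G held fixed)
    dx, dg = sigmoid(w * x + c), sigmoid(w * g + c)
    # gradient ASCENT on the value function
    w += lr * (np.mean((1 - dx) * x) - np.mean(dg * g))
    c += lr * (np.mean(1 - dx) - np.mean(dg))

    # --- Step 3: update G with the non-saturating loss, maximize log D(G(z)) ---
    z = rng.normal(0.0, 1.0, batch)
    g = a * z + b
    dg = sigmoid(w * g + c)
    # chain rule: dD/dg = dg*(1-dg)... folded into d(log D)/dg = (1-dg)*w
    a += lr * np.mean((1 - dg) * w * z)
    b += lr * np.mean((1 - dg) * w)

# --- Step 4: monitor — the generated mean should drift toward 4.0 ---
print(f"G output mean ~ {np.mean(a * rng.normal(0, 1, 10000) + b):.2f} (target 4.0)")
```

Note the asymmetry: the discriminator step performs gradient ascent on the value function, while the generator step climbs its own surrogate objective log D(G(z)); in real implementations both are usually expressed as descent on binary cross-entropy losses with an optimizer such as Adam.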

Loss Dynamics

[Plot: Generator and Discriminator loss curves over training]


The Minimax Game

min_G max_D V(D, G) = E_{x ~ p_data(x)}[log D(x)] + E_{z ~ p_z(z)}[log(1 − D(G(z)))]
🎨
Generator's Strategy
Minimize the objective by making D(G(z)) as close to 1 as possible, fooling the discriminator into thinking fake samples are real.
πŸ”
Discriminator's Strategy
Maximize the objective by correctly classifying real samples (D(x) β†’ 1) and fake samples (D(G(z)) β†’ 0).
Nash Equilibrium:
At convergence, the Generator produces samples indistinguishable from real data, and the Discriminator outputs 0.5 for all samples.
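This equilibrium value can be checked numerically: with the optimal discriminator D*(x) = p_data(x) / (p_data(x) + p_g(x)), matching distributions give D(·) = 0.5 everywhere, so the value function collapses to a constant:

```python
import numpy as np

# At the Nash equilibrium D outputs 0.5 on every (real or fake) sample, so
# V = E[log 0.5] + E[log(1 - 0.5)] = -2 log 2 = -log 4,
# the global minimum of the minimax game.
d_real = np.full(1000, 0.5)
d_fake = np.full(1000, 0.5)
V = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
print(V, -np.log(4.0))  # both are -log 4 ~ -1.386
```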
Prepared by Dr. Gorkem Kar