CS5720 - Week 8
Slide 147 of 160
GAN Training Process
Training Steps
1. Initialize Networks: initialize both the Generator and the Discriminator with random weights.
2. Train Discriminator: update D to maximize log(D(x)) + log(1 - D(G(z))).
3. Train Generator: update G to minimize log(1 - D(G(z))), or in practice maximize log(D(G(z))).
4. Repeat & Monitor: alternate the two updates until convergence or satisfactory results.
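The alternating loop above can be sketched end to end on a toy problem. This is an illustrative NumPy example, not a standard GAN implementation: the "generator" G(z) = z + theta just shifts noise toward the real mean, and the "discriminator" D(x) = sigmoid(w*x + b) is scalar logistic regression, so the gradients can be written by hand.

```python
import numpy as np

# Toy 1-D GAN following the four training steps above.
# All names (theta, w, b) and distributions are illustrative choices.
rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Step 1: initialize both networks' parameters.
theta = 0.0          # generator parameter: G(z) = z + theta
w, b = 0.1, 0.0      # discriminator parameters: D(x) = sigmoid(w*x + b)
lr, batch = 0.1, 64

for step in range(2000):
    # Step 2: train D to maximize log D(x) + log(1 - D(G(z))).
    x_real = rng.normal(4.0, 1.0, batch)        # real data ~ N(4, 1)
    x_fake = rng.normal(0.0, 1.0, batch) + theta
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    # Gradient of the negated objective w.r.t. w and b (gradient ascent on V).
    gw = (-(1 - d_real) * x_real + d_fake * x_fake).mean()
    gb = (-(1 - d_real) + d_fake).mean()
    w -= lr * gw
    b -= lr * gb

    # Step 3: train G with the non-saturating loss, i.e. maximize log D(G(z)).
    x_fake = rng.normal(0.0, 1.0, batch) + theta
    d_fake = sigmoid(w * x_fake + b)
    g_theta = (-(1 - d_fake) * w).mean()        # d/dtheta of -log D(G(z))
    theta -= lr * g_theta

# Step 4: after many alternations, G's output distribution should
# have drifted toward the real data's mean of 4.0.
print(f"theta = {theta:.2f} (real mean is 4.0)")
```

With real 2× overparameterized networks the same loop structure holds; only the hand-written gradients are replaced by backpropagation.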
Loss Dynamics
[Chart: Generator Loss and Discriminator Loss plotted against epochs completed]
The Minimax Game
min_G max_D V(D, G) = E_{x ~ p_data(x)}[log D(x)] + E_{z ~ p_z(z)}[log(1 - D(G(z)))]
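In practice the two expectations are estimated by Monte Carlo over mini-batches. A minimal sketch, with illustrative (not slide-given) choices of p_data, generator output, and discriminator:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Illustrative setup: real data ~ N(1, 1), generator samples ~ N(-1, 1),
# and a fixed logistic discriminator D(x) = sigmoid(2x).
x_real = rng.normal(1.0, 1.0, 100_000)
x_fake = rng.normal(-1.0, 1.0, 100_000)
D = lambda x: sigmoid(2.0 * x)

# V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))], estimated by sample means.
v = np.log(D(x_real)).mean() + np.log(1.0 - D(x_fake)).mean()
print(f"V(D, G) estimate: {v:.3f}")
```

Because this discriminator separates the two distributions better than chance, the estimate lands above the equilibrium value -log 4 but below 0.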
Generator's Strategy
Minimize the objective by making D(G(z)) as close to 1 as possible, fooling the discriminator into thinking fake samples are real.
Discriminator's Strategy
Maximize the objective by correctly classifying real samples (D(x) → 1) and fake samples (D(G(z)) → 0).
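Early in training the discriminator wins easily, so D(G(z)) is near 0 and the original generator loss log(1 - D(G(z))) saturates; this is why step 3 of the training process substitutes maximizing log(D(G(z))). A quick check of the gradient magnitudes with respect to d = D(G(z)), at an illustrative early-training value:

```python
# d = D(G(z)) early in training: the discriminator easily spots fakes.
d = 0.01

# |d/dd log(1 - d)|: the saturating loss gives an almost-flat signal.
grad_saturating = 1.0 / (1.0 - d)       # ≈ 1.01

# |d/dd (-log d)|: the non-saturating loss gives a strong signal.
grad_non_saturating = 1.0 / d           # ≈ 100

print(grad_saturating, grad_non_saturating)
```

Both losses push D(G(z)) toward 1, but the non-saturating form supplies roughly 100× the gradient exactly where the generator needs it most.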
Nash Equilibrium:
At convergence, the Generator produces samples indistinguishable from real data, and the Discriminator outputs 0.5 for all samples.
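The equilibrium value of the game follows directly: with D outputting 0.5 for every sample, both expectations in V(D, G) reduce to log(1/2), so V = -log 4. A one-line check:

```python
import math

d_star = 0.5  # discriminator output at equilibrium, identical for real and fake
v_equilibrium = math.log(d_star) + math.log(1.0 - d_star)
print(v_equilibrium)  # ≈ -1.386, i.e. -log 4
```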
Prepared by Dr. Gorkem Kar