CS5720 - Week 3
Slide 50 of 60
Interactive Deep Network Training Demo
Demo controls:
- Network Layers: 3
- Neurons per Layer: 8
- Learning Rate: 0.01
- Batch Size: 16 / 32 / 64 / 128
- Activation Function: ReLU / Sigmoid / Tanh / Leaky ReLU
- Optimizer: SGD / SGD + Momentum / Adam / RMSprop
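The four activation choices in the demo can be sketched as scalar functions. This is an illustrative plain-Python sketch, not the demo's actual code; real frameworks apply these elementwise to tensors, and the Leaky ReLU slope of 0.01 is an assumed default.

```python
import math

def relu(x):
    # ReLU: pass positives through, zero out negatives
    return max(0.0, x)

def leaky_relu(x, alpha=0.01):
    # Leaky ReLU: small slope alpha (assumed 0.01) for negatives,
    # which keeps a nonzero gradient where ReLU would go flat
    return x if x > 0 else alpha * x

def sigmoid(x):
    # Sigmoid: squashes any input into (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    # Tanh: squashes any input into (-1, 1), zero-centered
    return math.tanh(x)

print(relu(-2.0), leaky_relu(-2.0))  # → 0.0 -0.02
print(sigmoid(0.0), tanh(0.0))       # → 0.5 0.0
```

Comparing the negative-input behavior is what distinguishes ReLU from Leaky ReLU; the two squashing functions differ mainly in their output range and centering.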
Training status:
- Training Loss: 0.000
- Accuracy: 0.0%
- Epoch: 0 / 100

Buttons: Start Training / Pause / Reset
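The demo's default configuration can be sketched as a small NumPy training loop: a 3-layer network with 8 neurons per hidden layer, ReLU activations, plain SGD with learning rate 0.01, batch size 16, and up to 100 epochs. The toy dataset, initialization scale, and sigmoid output head are illustrative assumptions, not the demo's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# Toy binary task (assumed): label = 1 if the two inputs share a sign.
X = rng.standard_normal((256, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)

# 3 weight layers: 2 -> 8 -> 8 -> 1
sizes = [2, 8, 8, 1]
W = [rng.standard_normal((a, b)) * 0.5 for a, b in zip(sizes, sizes[1:])]
b = [np.zeros((1, n)) for n in sizes[1:]]

lr, batch, epochs = 0.01, 16, 100  # demo defaults

for epoch in range(epochs):
    perm = rng.permutation(len(X))
    for i in range(0, len(X), batch):
        xb, yb = X[perm[i:i + batch]], y[perm[i:i + batch]]
        # Forward pass: ReLU on hidden layers, sigmoid on the output
        acts, zs = [xb], []
        for l in range(3):
            z = acts[-1] @ W[l] + b[l]
            zs.append(z)
            acts.append(relu(z) if l < 2 else 1.0 / (1.0 + np.exp(-z)))
        # Backward pass: binary cross-entropy with sigmoid output
        # gives the simple output-layer error (prediction - target)
        delta = (acts[-1] - yb) / len(xb)
        for l in reversed(range(3)):
            gW = acts[l].T @ delta
            gb = delta.sum(axis=0, keepdims=True)
            if l > 0:
                # Propagate error through the (pre-update) weights
                # and the ReLU derivative
                delta = (delta @ W[l].T) * (zs[l - 1] > 0)
            # Plain SGD update, lr = 0.01
            W[l] -= lr * gW
            b[l] -= lr * gb

# Evaluate training accuracy
preds = X
for l in range(3):
    preds = preds @ W[l] + b[l]
    preds = relu(preds) if l < 2 else 1.0 / (1.0 + np.exp(-preds))
acc = ((preds > 0.5) == (y > 0.5)).mean()
print(f"accuracy: {acc:.2f}")
```

Swapping the optimizer dropdown corresponds to changing the final update step: SGD + Momentum would accumulate a velocity term per parameter, while Adam and RMSprop additionally rescale the step by running estimates of the gradient's second moment.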
Prepared by Dr. Gorkem Kar