CS5720 - Week 2
Slide 34 of 40

Early Stopping

The Concept

Early Stopping is a regularization technique that stops training when the model's performance on validation data starts to deteriorate, preventing overfitting.
Key Idea:

• Monitor validation loss during training
• When validation loss stops improving (or gets worse), stop training
• Use the best model from earlier in training
• Prevents the model from overfitting to training data
Why It Works: Training loss usually keeps decreasing, but validation loss may start increasing when overfitting begins.
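The key idea above can be sketched in a few lines of plain Python. The validation-loss values below are made-up illustrative numbers, not output from a real model:

```python
# Minimal early-stopping sketch: stop when validation loss stops improving.
# The loss sequence is illustrative only.
val_losses = [1.00, 0.80, 0.65, 0.60, 0.58, 0.59, 0.61, 0.63, 0.66]

best_loss = float("inf")
best_epoch = 0
patience, counter = 2, 0            # stop after 2 epochs without improvement

for epoch, loss in enumerate(val_losses):
    if loss < best_loss:            # validation loss improved
        best_loss, best_epoch = loss, epoch
        counter = 0
    else:                           # no improvement this epoch
        counter += 1
        if counter >= patience:
            break                   # early stop: use the epoch-4 model

print(best_epoch, best_loss)        # -> 4 0.58
```

Note that training halts at epoch 6, but the model we keep is the one from epoch 4, where validation loss was lowest.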

Implementation Strategies

🕐 Patience Strategy
Wait for a certain number of epochs without improvement before stopping
📊 Min Delta Strategy
Count an improvement as significant only if it exceeds a minimum threshold (min delta)
💾 Best Model Restoration
Save the best model and restore it when stopping early
📈 Monitoring Metrics
Choose appropriate metrics to monitor (loss, accuracy, F1-score, etc.)
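The first three strategies are usually combined in a single helper. Below is a framework-agnostic sketch (the class name `EarlyStopping` and the `state` dictionary standing in for model weights are illustrative assumptions, not a specific library's API):

```python
class EarlyStopping:
    """Sketch combining patience, min-delta, and best-model restoration.
    `state` is a dict standing in for model weights."""

    def __init__(self, patience=5, min_delta=0.001):
        self.patience = patience
        self.min_delta = min_delta
        self.best_loss = float("inf")
        self.best_state = None      # best model, restored when stopping
        self.counter = 0            # epochs since last significant improvement

    def step(self, val_loss, state):
        """Call once per epoch; returns True when training should stop."""
        if val_loss < self.best_loss - self.min_delta:
            # Improvement larger than min_delta: record it and save a copy.
            self.best_loss = val_loss
            self.best_state = dict(state)
            self.counter = 0
        else:
            self.counter += 1
        return self.counter >= self.patience


# Usage with made-up losses: the last two "improvements" are below min_delta,
# so they do not reset the patience counter.
stopper = EarlyStopping(patience=2, min_delta=0.01)
losses = [1.0, 0.9, 0.895, 0.894]
stopped = False
for i, loss in enumerate(losses):
    if stopper.step(loss, {"epoch": i}):
        stopped = True
        break
```

After the loop, `stopper.best_state` holds the epoch-1 model, showing why the min-delta check matters: tiny fluctuations no longer count as progress. Deep-learning frameworks ship equivalents, e.g. Keras's `EarlyStopping` callback with `patience`, `min_delta`, and `restore_best_weights` arguments.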

Interactive Early Stopping Simulation

[Interactive demo: adjustable controls for Patience (default 5 epochs), Min Delta (default 0.001), and Learning Rate (default 0.01), with live readouts of the current epoch, training loss, validation loss, best validation loss, and the patience counter. When the patience counter reaches its limit, the demo displays "🛑 Early Stopping Triggered!" and reports the epoch at which training stopped.]

Prepared by Dr. Gorkem Kar