CS5720 - Week 2
Slide 22 of 40
Loss Functions: Measuring Errors
What is a Loss Function?
A loss function (also called a cost function) quantifies how wrong our neural network's predictions are compared to the actual correct answers.
Why do we need it?
• To know how wrong we are
• To have a single number to minimize
• To guide the learning process
• To compare different models
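The points above can be sketched in a few lines of Python: the loss collapses "how wrong we are" into one number, and its gradient guides learning. The toy model, values, and learning rate below are illustrative assumptions, not from the slide.

```python
# Illustrative sketch: one gradient-descent step guided by a squared-error loss.
# The model (y = w * x), data, and learning rate are made-up toy values.
w = 0.0                # single trainable weight of a toy model
x, target = 2.0, 5.0   # one input and its correct answer
lr = 0.1               # learning rate

pred = w * x                      # model prediction (0.0 before training)
loss = (pred - target) ** 2       # squared error: the single number to minimize
grad = 2 * (pred - target) * x    # derivative of the loss with respect to w
w = w - lr * grad                 # step in the direction that lowers the loss

new_loss = (w * x - target) ** 2  # the loss shrinks after the update
```

After one step the weight moves from 0.0 to 2.0 and the loss drops from 25.0 to 1.0, which is exactly the "guide the learning process" role of the loss.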
🎯 Think of it like...
A loss function is like a GPS telling you how far you are from your destination. The bigger the loss, the further you are from the correct answer!
Common Loss Functions
📊 Regression Loss: for predicting continuous values (prices, temperatures, scores)
🏷️ Classification Loss: for predicting categories (cat/dog, spam/not spam)
🔧 Specialized Loss: for specific tasks (object detection, segmentation)
Remember: Different problems need different loss functions. Choosing the right one is crucial for successful training!
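As a sketch of the two most common families, here are plain-Python versions of mean squared error (regression) and binary cross-entropy (classification). The function names and toy inputs are my own, not from the slide.

```python
import math

def mse(preds, targets):
    # Mean squared error: average of (prediction - target)^2, used for regression
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def binary_cross_entropy(probs, labels):
    # Binary cross-entropy for two-class classification: probs are the
    # predicted probabilities of class 1, labels are 0 or 1. Confidently
    # wrong probabilities are penalized heavily.
    return -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                for p, y in zip(probs, labels)) / len(probs)
```

For example, `mse([4.5, 3.0], [5.0, 5.0])` averages 0.25 and 4.0 to give 2.125, and a maximally uncertain prediction `binary_cross_entropy([0.5], [1])` gives ln 2 ≈ 0.693.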
Loss Function in Action
Using squared error, (predicted - actual)^2, as the loss:
Excellent Prediction: Predicted 4.99, Actual 5.00, Loss 0.0001. Almost perfect!
Okay Prediction: Predicted 4.50, Actual 5.00, Loss 0.25. Getting there...
Poor Prediction: Predicted 3.00, Actual 5.00, Loss 4.00. Needs improvement!
Goal: Minimize the loss function to make better predictions!
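The three examples above can be reproduced with a squared-error loss, which matches the 0.25 and 4.00 values on the slide; this snippet is a sketch, not the slide's own code.

```python
def squared_error(predicted, actual):
    # Squared error for a single prediction: small when close, large when far
    return (predicted - actual) ** 2

examples = [("Excellent", 4.99, 5.00),
            ("Okay",      4.50, 5.00),
            ("Poor",      3.00, 5.00)]

for name, pred, actual in examples:
    print(f"{name}: predicted {pred}, actual {actual}, "
          f"loss = {squared_error(pred, actual):.4f}")
```

Note how the loss grows quadratically: a prediction 4x further from the target (2.00 off vs 0.50 off) incurs a 16x larger loss.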
Prepared by Dr. Gorkem Kar