CS5720 - Week 4
Slide 63 of 80
Problems with Fully Connected Networks for Images
Major Challenges
💥 Parameter Explosion
A small 224×224 RGB image has 150,528 input values (224 × 224 × 3). Connecting them to just 1000 hidden units requires roughly 150 million parameters!
📍 Loss of Spatial Structure
Flattening an image destroys the 2D spatial relationships between pixels that are crucial for understanding.
🔄 No Translation Invariance
The same object at different positions requires completely different learned weights.
📈 Overfitting Risk
With millions of parameters and limited data, the network easily memorizes rather than generalizes.
⚡ Computational Cost
Matrix multiplications with millions of parameters are extremely expensive and memory-intensive.
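The parameter-explosion figure above is easy to verify. A minimal sketch (the helper name `fc_params` is ours, not from the slide):

```python
# Parameter count for one fully connected layer on a flattened 224x224 RGB image.
# This just checks the slide's "~150 million parameters" claim.

def fc_params(height, width, channels, hidden_units, bias=True):
    """Weights (+ optional biases) for one dense layer on a flattened image."""
    inputs = height * width * channels          # 224 * 224 * 3 = 150,528
    return inputs * hidden_units + (hidden_units if bias else 0)

params = fc_params(224, 224, 3, 1000)
print(f"{params:,}")  # 150,529,000 -> ~150M parameters
```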
The Scale Problem
Image Size: 28×28
Hidden Units: 100
Parameters: 78,400 weights
Memory Requirements: ~0.3 MB (float32)
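Even the small 28×28 case above can be worked out directly. A minimal sketch, assuming a single-channel (grayscale) image and 4-byte float32 weights; the helper name `weight_memory_mb` is ours:

```python
# Memory footprint of the weight matrix in the "Scale Problem" example:
# a 28x28 grayscale image fully connected to 100 hidden units, stored as float32.

def weight_memory_mb(height, width, channels, hidden_units, bytes_per_param=4):
    weights = height * width * channels * hidden_units
    return weights, weights * bytes_per_param / (1024 ** 2)

weights, mb = weight_memory_mb(28, 28, 1, 100)
print(f"{weights:,} weights, {mb:.2f} MB")  # 78,400 weights, 0.30 MB
```

The same function shows how quickly this blows up: at 224×224×3 with 1000 hidden units, the weight matrix alone needs over 500 MB.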
Fully Connected vs Convolutional: A Comparison
Fully Connected Network 🔴 — every pixel connects to every neuron
- Parameters (224×224 image): ~150M
- Spatial Awareness: None
- Translation Invariance: No
- Memory Usage: Very High

Convolutional Network 🟢 — local connections with shared weights
- Parameters (3×3 kernel): ~10K
- Spatial Awareness: Preserved
- Translation Invariance: Yes
- Memory Usage: Efficient
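The parameter counts in the comparison can be reproduced in a few lines. The convolutional figure below assumes a single 3×3 convolution with 3 input channels and 64 output channels (the 64 is an illustrative choice, not from the slide); even a small stack of such layers stays within the ~10K order of magnitude, because kernel weights are shared across all spatial positions:

```python
# Fully connected vs convolutional parameter counts.
# Conv layer: kernel weights are shared across the image, so the count
# depends only on kernel size and channel counts, not on image size.

def fc_params(h, w, c, units):
    return h * w * c * units + units          # weights + biases

def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out + c_out       # shared 3x3 kernels + biases

fc = fc_params(224, 224, 3, 1000)             # ~150.5M
conv = conv_params(3, 3, 64)                  # 1,792
print(f"FC: {fc:,}  Conv: {conv:,}  ratio: {fc // conv:,}x")
```

Note that the convolutional count is independent of the image resolution: doubling the image size leaves `conv_params` unchanged, while `fc_params` quadruples.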
Prepared by Dr. Gorkem Kar