The XOR (exclusive OR) gate outputs 1 when inputs are different. This simple pattern exposed a fundamental limitation of perceptrons.
Input 1 (x₁)   Input 2 (x₂)   Output (y)
0              0              0
0              1              1
1              0              1
1              1              0
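The truth table above can be reproduced in a couple of lines; for 0/1 inputs, Python's bitwise XOR operator computes exactly this gate:

```python
# The XOR truth table: output is 1 exactly when the two inputs differ.
for x1 in (0, 1):
    for x2 in (0, 1):
        y = x1 ^ x2  # Python's bitwise XOR matches the gate for 0/1 inputs
        print(f"{x1} XOR {x2} = {y}")
```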
⚠️ The Problem
No single line can separate the 0s from the 1s in this pattern. This is what we mean by "not linearly separable."
Why Perceptrons Fail
The Mathematical Impossibility:
• A perceptron's decision boundary is a single straight line (a hyperplane in higher dimensions)
• Separating XOR's outputs requires a curved boundary or multiple lines
• No combination of weights and bias satisfies all four input cases
• This limitation stalled neural network research for years!
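The "no weights exist" claim can be checked empirically. This sketch brute-forces a step-activation perceptron over a grid of weights and biases (the grid range and step size are arbitrary choices for illustration) and finds that none reproduces XOR:

```python
import itertools

def perceptron(x1, x2, w1, w2, b):
    """Step-activation perceptron: fires (1) when w1*x1 + w2*x2 + b > 0."""
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Try every weight/bias combination on a coarse grid from -5.0 to 5.0.
grid = [i / 2 for i in range(-10, 11)]
solutions = [
    (w1, w2, b)
    for w1, w2, b in itertools.product(grid, repeat=3)
    if all(perceptron(x1, x2, w1, w2, b) == y for (x1, x2), y in XOR.items())
]
print(len(solutions))  # 0 — no single perceptron on this grid computes XOR
```

A finite grid is not a proof, of course; the formal argument is the linear-separability one above. But the empty result illustrates it concretely.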
Historical Impact:
Minsky and Papert's 1969 book Perceptrons proved this limitation formally, contributing to the first "AI winter," during which funding and interest in neural network research declined sharply for roughly a decade.
XOR Pattern Visualization
[Scatter plot on x₁–x₂ axes: the corners (0,0) and (1,1) are labeled 0; (0,1) and (1,0) are labeled 1.]
Classification Challenge
Try to imagine drawing a single straight line that puts all red circles (0s) on one side and all green circles (1s) on the other. It's impossible!
🚫 Single Perceptron Cannot Solve XOR
This discovery was devastating but also enlightening. It showed that solving non-linearly separable problems requires multi-layer networks with hidden layers, which combine several linear boundaries into a non-linear one. The solution? Stack perceptrons in layers!
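The layered solution can be sketched with hand-picked weights (one common textbook choice, not the only one): one hidden unit computes OR, another computes AND, and the output fires when OR holds but AND does not, which is exactly XOR:

```python
def step(z):
    """Step activation: 1 if the weighted sum is positive, else 0."""
    return 1 if z > 0 else 0

def xor_net(x1, x2):
    """Two-layer perceptron network for XOR with fixed weights."""
    h1 = step(x1 + x2 - 0.5)   # hidden unit 1: OR  (fires if any input is 1)
    h2 = step(x1 + x2 - 1.5)   # hidden unit 2: AND (fires only if both are 1)
    return step(h1 - h2 - 0.5) # output: OR and not AND

for x1 in (0, 1):
    for x2 in (0, 1):
        print(x1, x2, "->", xor_net(x1, x2))  # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```

Each hidden unit draws one straight line; the output unit combines the two half-planes into the non-linear region a single perceptron could never carve out.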