CS5720 - Week 2
Slide 39 of 40
Hands-On: Training Your First Neural Network
Training Steps
1. Prepare Your Data: Load, normalize, and split your dataset into training and validation sets
2. Define the Model: Create the neural network architecture with layers and activation functions
3. Choose Loss & Optimizer: Select an appropriate loss function and optimization algorithm
4. Train the Model: Run the training loop with forward pass, loss calculation, and backpropagation
5. Monitor & Evaluate: Track metrics, validate performance, and adjust hyperparameters
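Step 1 mentions normalizing and splitting the data, substeps the complete example on this slide does not show. A minimal sketch of that preparation (variable names here are illustrative, not part of the slide's code):

```python
import torch
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Generate a toy dataset, standardize features, and hold out a validation set
X, y = make_classification(n_samples=1000, n_features=20)
X = (X - X.mean(axis=0)) / X.std(axis=0)   # zero-mean, unit-variance features
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2)

# Convert the training split to tensors, as in the full example below
X_train = torch.FloatTensor(X_train)
y_train = torch.LongTensor(y_train)
print(X_train.shape)   # torch.Size([800, 20])
```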
Complete Code Example
🐍 PyTorch Implementation
```python
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

# Step 1: Prepare Data
def prepare_data():
    # Load your dataset (e.g., from sklearn)
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=1000, n_features=20)

    # Convert to tensors
    X = torch.FloatTensor(X)
    y = torch.LongTensor(y)

    # Create dataset and dataloader
    dataset = TensorDataset(X, y)
    dataloader = DataLoader(dataset, batch_size=32, shuffle=True)
    return dataloader

# Step 2: Define Model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(20, 64),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 2)
        )

    def forward(self, x):
        return self.layers(x)
```
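A quick shape check is a good habit after defining any model: pass a dummy batch through and confirm the output dimensions. This sketch restates the same layer stack inline so it runs standalone:

```python
import torch
import torch.nn as nn

# Same 20 -> 64 -> 32 -> 2 stack as the model above, restated for a standalone check
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(), nn.Dropout(0.3),
    nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2)
)
x = torch.randn(8, 20)      # dummy batch: 8 samples, 20 features each
logits = model(x)
print(logits.shape)         # torch.Size([8, 2]) -- one logit per class
```

If the input feature count or layer sizes disagree, this check fails immediately with a shape error, which is far easier to debug than a failure mid-training.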
🚀 Training Loop
```python
# Step 3: Setup training
model = NeuralNetwork()
criterion = nn.CrossEntropyLoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Step 4: Training loop
def train_model(model, dataloader, epochs=100):
    model.train()
    for epoch in range(epochs):
        total_loss = 0
        correct = 0
        total = 0
        for batch_data, batch_labels in dataloader:
            # Forward pass
            outputs = model(batch_data)
            loss = criterion(outputs, batch_labels)

            # Backward pass
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            # Statistics
            total_loss += loss.item()
            _, predicted = torch.max(outputs.data, 1)
            total += batch_labels.size(0)
            correct += (predicted == batch_labels).sum().item()

        # Print progress
        accuracy = 100 * correct / total
        avg_loss = total_loss / len(dataloader)
        print(f'Epoch {epoch+1}: Loss={avg_loss:.4f}, Accuracy={accuracy:.2f}%')

# Run training
dataloader = prepare_data()
train_model(model, dataloader)
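The loop above reports accuracy only on the training data. Step 5 also calls for validating performance, which means evaluating on held-out data with dropout disabled and gradients turned off. A hedged sketch of one way to do this (the `evaluate` function and `val_loader` names are illustrative, not part of the slide's code):

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def evaluate(model, val_loader):
    model.eval()                       # disable dropout for evaluation
    correct = total = 0
    with torch.no_grad():              # no gradient tracking needed here
        for data, labels in val_loader:
            preds = model(data).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    model.train()                      # restore training mode
    return 100 * correct / total

# Usage with a tiny synthetic validation set
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
X_val = torch.randn(100, 20)
y_val = torch.randint(0, 2, (100,))
val_loader = DataLoader(TensorDataset(X_val, y_val), batch_size=32)
acc = evaluate(model, val_loader)
print(f'Validation accuracy: {acc:.2f}%')
```

Calling something like this at the end of each epoch, alongside the training metrics already printed, is what lets you spot overfitting: training accuracy keeps rising while validation accuracy stalls or drops.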
🚀 Interactive Training Simulation
[Interactive widget: animated diagram of the 20 → 64 → 32 → 2 network with live readouts for loss, accuracy, epoch, and learning rate, plus Start/Pause/Reset training controls.]
💡 Click "Start Training" to begin the simulation!
📊 Watch the loss decrease and accuracy increase
🎯 Goal: Achieve >90% accuracy with low loss
Prepared by Dr. Gorkem Kar