CS5720 - Week 9

PyTorch Tensors and Autograd

PyTorch Tensors

🔢 What are Tensors?
Multi-dimensional arrays that are the fundamental building blocks of PyTorch, similar to NumPy arrays but with GPU support.
⚡ Creating Tensors
Multiple ways to create tensors: from data, with specific values, or with special initialization methods.
🎯 Tensor Operations
Mathematical operations, reshaping, indexing, and broadcasting - all optimized for GPU computation.
🚀 GPU Acceleration
Moving tensors to GPU for massive performance improvements in deep learning computations.
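The device move described above can be sketched as follows; the selection falls back to CPU when no GPU is present, so the same code runs anywhere:

```python
import torch

# Pick the GPU if one is available, otherwise fall back to CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(1000, 1000)
x_dev = x.to(device)      # copies the tensor to the chosen device
y = x_dev @ x_dev         # matrix multiply runs on that device

print(y.device)
```

Note that `.to(device)` returns a new tensor; the original `x` stays where it was.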

Automatic Differentiation

  • 🔄
    Autograd Engine
    Automatic computation of gradients for backpropagation
  • 📊
    Computational Graph
    Dynamic graph construction tracking all operations
  • 🧮
    Gradient Computation
    Efficient backward pass through the computation graph
  • 📝
    requires_grad
    Control which tensors need gradient computation
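A minimal sketch of the `requires_grad` control described above, together with the two standard ways to opt out of graph construction, `detach()` and `torch.no_grad()`:

```python
import torch

w = torch.randn(3, requires_grad=True)   # tracked by autograd
x = torch.randn(3)                       # not tracked (default is False)

y = (w * x).sum()
print(y.requires_grad)        # True: y depends on a tracked tensor

frozen = w.detach()           # same data, cut from the graph
print(frozen.requires_grad)   # False

with torch.no_grad():         # operations inside build no graph
    z = (w * x).sum()
print(z.requires_grad)        # False
```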

Tensor Creation and Autograd Demo

Scalar (0D) → torch.Size([])
Vector (1D) → torch.Size([3])
Matrix (2D) → torch.Size([3, 4])
Tensor (3D) → torch.Size([2, 3, 4])
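The four shapes listed above can be reproduced directly:

```python
import torch

scalar = torch.tensor(3.14)          # 0D: no dimensions at all
vector = torch.tensor([1., 2., 3.])  # 1D: three elements
matrix = torch.zeros(3, 4)           # 2D: 3 rows, 4 columns
cube   = torch.zeros(2, 3, 4)        # 3D: 2 matrices of shape (3, 4)

print(scalar.shape)  # torch.Size([])
print(vector.shape)  # torch.Size([3])
print(matrix.shape)  # torch.Size([3, 4])
print(cube.shape)    # torch.Size([2, 3, 4])
```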
Basic Tensor Operations
import torch

# Create tensors
x = torch.tensor([1, 2, 3], dtype=torch.float32)  # shape (3,)
y = torch.ones(3, 2)                              # shape (3, 2), all ones
z = torch.randn(2, 3, 4)                          # shape (2, 3, 4), standard normal

# Basic operations
result = x + 5             # elementwise add, shape (3,)
reshaped = z.view(-1, 4)   # flatten leading dims -> shape (6, 4)

print(f"Shape: {result.shape}")
print(f"Device: {result.device}")
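Broadcasting, mentioned among the tensor operations above but not shown in this listing, lets tensors of different shapes combine without explicit copying. A minimal sketch:

```python
import torch

a = torch.ones(3, 1)    # shape (3, 1)
b = torch.arange(4.)    # shape (4,)

# Broadcasting expands both to shape (3, 4) before the add
c = a + b
print(c.shape)          # torch.Size([3, 4])
print(c[0])             # tensor([1., 2., 3., 4.])
```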
Autograd in Action
# Enable gradient computation
x = torch.tensor([2.0], requires_grad=True)
y = torch.tensor([3.0], requires_grad=True)

# Forward pass
z = x * y + x**2
loss = z.sum()

# Backward pass
loss.backward()

print(f"dL/dx: {x.grad}")  # y + 2x = 3 + 4 = 7.0
print(f"dL/dy: {y.grad}")  # x = 2.0
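One detail worth noting about the demo above: `.grad` accumulates across `backward()` calls rather than being overwritten, so it must be reset between iterations (optimizers do this via `zero_grad()`). A sketch:

```python
import torch

x = torch.tensor([2.0], requires_grad=True)

# Two backward passes without resetting: gradients add up
for _ in range(2):
    loss = (x ** 2).sum()   # dL/dx = 2x = 4.0 per pass
    loss.backward()

print(x.grad)     # tensor([8.]) -- 4.0 accumulated twice

x.grad.zero_()    # reset before the next backward pass
print(x.grad)     # tensor([0.])
```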
Prepared by Dr. Gorkem Kar