Quickstart#
This section runs through the API for common tasks in machine learning. Refer to the links in each section to dive deeper.
Working with data#
PyTorch has two primitives to work with data: torch.utils.data.DataLoader and torch.utils.data.Dataset. Dataset stores the samples and their corresponding labels, and DataLoader wraps an iterable around the Dataset.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
PyTorch offers domain-specific libraries such as TorchText, TorchVision, and TorchAudio, all of which include datasets. For this tutorial, we will be using a TorchVision dataset.
The torchvision.datasets module contains Dataset objects for many real-world vision datasets such as CIFAR and COCO (full list here). In this tutorial, we use the FashionMNIST dataset. Every TorchVision Dataset includes two arguments, transform and target_transform, which modify the samples and labels respectively.
# Download training data from open datasets.
training_data = datasets.FashionMNIST(
    root="data",
    train=True,
    download=True,
    transform=ToTensor(),
)
# Download test data from open datasets.
test_data = datasets.FashionMNIST(
    root="data",
    train=False,
    download=True,
    transform=ToTensor(),
)
We pass the Dataset as an argument to DataLoader. This wraps an iterable over our dataset, and supports automatic batching, sampling, shuffling, and multiprocess data loading. Here we define a batch size of 64, i.e. each element in the dataloader iterable will return a batch of 64 features and labels.
batch_size = 64

# Create data loaders.
train_dataloader = DataLoader(training_data, batch_size=batch_size)
test_dataloader = DataLoader(test_data, batch_size=batch_size)

for X, y in test_dataloader:
    print(f"Shape of X [N, C, H, W]: {X.shape}")
    print(f"Shape of y: {y.shape} {y.dtype}")
    break
Shape of X [N, C, H, W]: torch.Size([64, 1, 28, 28])
Shape of y: torch.Size([64]) torch.int64
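The sampling, shuffling, and multiprocess loading mentioned above are opt-in via DataLoader arguments. As a rough sketch (shuffle and num_workers are standard DataLoader parameters; the specific values here are illustrative, not tuned):

# Illustrative only: reshuffle the training data every epoch and
# load batches in two background worker processes.
shuffled_train_dataloader = DataLoader(
    training_data,
    batch_size=batch_size,
    shuffle=True,
    num_workers=2,
)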
Read more about loading data in PyTorch.
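FashionMNIST comes ready-made, but the same Dataset interface works for your own data: subclass torch.utils.data.Dataset and implement __len__ and __getitem__. Below is a minimal sketch over in-memory tensors (the TensorPairDataset name and the random tensors are purely illustrative):

from torch.utils.data import Dataset

class TensorPairDataset(Dataset):
    """Minimal custom Dataset over two pre-built tensors (illustrative)."""
    def __init__(self, samples, labels):
        self.samples = samples   # e.g. a float tensor of shape [N, 1, 28, 28]
        self.labels = labels     # e.g. an int64 tensor of shape [N]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx], self.labels[idx]

# A DataLoader can wrap it just like the FashionMNIST datasets above.
toy = TensorPairDataset(torch.rand(100, 1, 28, 28), torch.randint(0, 10, (100,)))
toy_loader = DataLoader(toy, batch_size=batch_size)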
Creating Models#
To define a neural network in PyTorch, we create a class that inherits from nn.Module. We define the layers of the network in the __init__ function and specify how data will pass through the network in the forward function. To accelerate operations in the neural network, we move it to the GPU if available.
# Get cpu or gpu device for training.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")

# Define model
class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.linear_relu_stack = nn.Sequential(
            nn.Linear(28*28, 512),
            nn.ReLU(),
            nn.Linear(512, 512),
            nn.ReLU(),
            nn.Linear(512, 10)
        )

    def forward(self, x):
        x = self.flatten(x)
        logits = self.linear_relu_stack(x)
        return logits

model = NeuralNetwork().to(device)
print(model)
Using cuda device
NeuralNetwork(
  (flatten): Flatten(start_dim=1, end_dim=-1)
  (linear_relu_stack): Sequential(
    (0): Linear(in_features=784, out_features=512, bias=True)
    (1): ReLU()
    (2): Linear(in_features=512, out_features=512, bias=True)
    (3): ReLU()
    (4): Linear(in_features=512, out_features=10, bias=True)
  )
)
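Before training, it can help to sanity-check the model by pushing a dummy batch through it; the raw outputs are per-class scores (logits). A quick sketch (the random tensor stands in for a real 28x28 image):

# Feed one random 28x28 "image" through the model; logits has shape [1, 10].
X = torch.rand(1, 28, 28, device=device)
logits = model(X)
pred_probab = nn.Softmax(dim=1)(logits)  # normalize logits to probabilities
print(f"Predicted class: {pred_probab.argmax(1)}")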
Read more about building neural networks in PyTorch.
Optimizing the Model Parameters#
To train a model, we need a loss function and an optimizer.
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
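Any optimizer from torch.optim can be dropped in here without changing the rest of the loop. For example, Adam is a common alternative to SGD (lr=1e-3 is a conventional starting point, not a tuned value); note that the losses printed further below were produced with the SGD optimizer above:

# Illustrative alternative; if used, it replaces the SGD optimizer above
# and the training losses will differ from the run shown below.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)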
In a single training loop, the model makes predictions on the training dataset (fed to it in batches), and backpropagates the prediction error to adjust the model’s parameters.
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)
    model.train()
    for batch, (X, y) in enumerate(dataloader):
        X, y = X.to(device), y.to(device)

        # Compute prediction error
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        if batch % 100 == 0:
            loss, current = loss.item(), batch * len(X)
            print(f"loss: {loss:>7f} [{current:>5d}/{size:>5d}]")
We also check the model’s performance against the test dataset to ensure it is learning.
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    model.eval()
    test_loss, correct = 0, 0
    with torch.no_grad():
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            test_loss += loss_fn(pred, y).item()
            correct += (pred.argmax(1) == y).type(torch.float).sum().item()
    test_loss /= num_batches
    correct /= size
    print(f"Test Error: \n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
The training process is conducted over several iterations (epochs). During each epoch, the model learns parameters to make better predictions. We print the model’s accuracy and loss at each epoch; we’d like to see the accuracy increase and the loss decrease with every epoch.
epochs = 5
for t in range(epochs):
    print(f"Epoch {t+1}\n-------------------------------")
    train(train_dataloader, model, loss_fn, optimizer)
    test(test_dataloader, model, loss_fn)
print("Done!")
Epoch 1
-------------------------------
loss: 2.285398 [    0/60000]
loss: 2.283791 [ 6400/60000]
loss: 2.262125 [12800/60000]
loss: 2.268898 [19200/60000]
loss: 2.245279 [25600/60000]
loss: 2.207426 [32000/60000]
loss: 2.222483 [38400/60000]
loss: 2.178722 [44800/60000]
loss: 2.179034 [51200/60000]
loss: 2.158397 [57600/60000]
Test Error:
 Accuracy: 42.4%, Avg loss: 2.143707

Epoch 2
-------------------------------
loss: 2.139070 [    0/60000]
loss: 2.142308 [ 6400/60000]
loss: 2.081449 [12800/60000]
loss: 2.114997 [19200/60000]
loss: 2.044828 [25600/60000]
loss: 1.979858 [32000/60000]
loss: 2.016280 [38400/60000]
loss: 1.927974 [44800/60000]
loss: 1.935596 [51200/60000]
loss: 1.878155 [57600/60000]
Test Error:
 Accuracy: 58.1%, Avg loss: 1.866661

Epoch 3
-------------------------------
loss: 1.882363 [    0/60000]
loss: 1.868083 [ 6400/60000]
loss: 1.746198 [12800/60000]
loss: 1.808158 [19200/60000]
loss: 1.677954 [25600/60000]
loss: 1.628422 [32000/60000]
loss: 1.658126 [38400/60000]
loss: 1.556594 [44800/60000]
loss: 1.576204 [51200/60000]
loss: 1.485382 [57600/60000]
Test Error:
 Accuracy: 62.0%, Avg loss: 1.499268

Epoch 4
-------------------------------
loss: 1.551270 [    0/60000]
loss: 1.532370 [ 6400/60000]
loss: 1.379266 [12800/60000]
loss: 1.467160 [19200/60000]
loss: 1.339273 [25600/60000]
loss: 1.327581 [32000/60000]
loss: 1.345283 [38400/60000]
loss: 1.274767 [44800/60000]
loss: 1.302080 [51200/60000]
loss: 1.211262 [57600/60000]
Test Error:
 Accuracy: 63.5%, Avg loss: 1.238499

Epoch 5
-------------------------------
loss: 1.304787 [    0/60000]
loss: 1.299364 [ 6400/60000]
loss: 1.131617 [12800/60000]
loss: 1.248378 [19200/60000]
loss: 1.121003 [25600/60000]
loss: 1.133322 [32000/60000]
loss: 1.156032 [38400/60000]
loss: 1.101118 [44800/60000]
loss: 1.131034 [51200/60000]
loss: 1.054450 [57600/60000]
Test Error:
 Accuracy: 64.7%, Avg loss: 1.077314

Done!
Read more about Training your model.
Saving Models#
A common way to save a model is to serialize the internal state dictionary (containing the model parameters).
torch.save(model.state_dict(), "model.pth")
print("Saved PyTorch Model State to model.pth")
Saved PyTorch Model State to model.pth
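To resume training later, a common pattern is to save the optimizer state alongside the model weights; torch.save serializes any picklable object, including a dict. A sketch (the checkpoint.pth filename and the dict keys are arbitrary conventions, not a fixed API):

# Save a resumable checkpoint; the key names are arbitrary conventions.
torch.save({
    "model_state_dict": model.state_dict(),
    "optimizer_state_dict": optimizer.state_dict(),
    "epochs_run": epochs,
}, "checkpoint.pth")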
Loading Models#
The process for loading a model includes re-creating the model structure and loading the state dictionary into it.
model = NeuralNetwork()
model.load_state_dict(torch.load("model.pth"))
<All keys matched successfully>
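Two torch.load options are worth knowing when loading on a different machine: map_location remaps tensors saved on a GPU to another device, and recent PyTorch releases also accept weights_only=True to restrict unpickling to tensor data. A sketch (weights_only availability depends on your PyTorch version):

# Load GPU-saved weights onto the CPU; weights_only=True (recent
# PyTorch versions) avoids executing arbitrary pickled objects.
state = torch.load("model.pth", map_location="cpu", weights_only=True)
model.load_state_dict(state)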
This model can now be used to make predictions.
classes = [
    "T-shirt/top",
    "Trouser",
    "Pullover",
    "Dress",
    "Coat",
    "Sandal",
    "Shirt",
    "Sneaker",
    "Bag",
    "Ankle boot",
]
model.eval()
x, y = test_data[10][0], test_data[10][1]
with torch.no_grad():
    pred = model(x)
    predicted, actual = classes[pred[0].argmax(0)], classes[y]
    print(f'Predicted: "{predicted}", Actual: "{actual}"')
Predicted: "Pullover", Actual: "Coat"
Read more about Saving & Loading your model.