CDOT Wiki β
DPS921/PyTorch: Convolutional Neural Networks

265 bytes added, 13:17, 30 November 2020
Parallelization Methods
import torch
import torch.nn as nn
import torch.optim as optim

model = ToyModel()  # ToyModel places its layers on different GPUs
loss_fn = nn.MSELoss()
optimizer = optim.SGD(model.parameters(), lr=0.001)

optimizer.zero_grad()
outputs = model(torch.randn(20, 10))      # forward pass spans both GPUs
labels = torch.randn(20, 5).to('cuda:1')  # labels on the same device as outputs
loss_fn(outputs, labels).backward()
optimizer.step()
backward() and torch.optim automatically handle the gradients as if the model were on a single GPU. The only requirement is that the labels are on the same device as the outputs when calling the loss function.
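The snippet above assumes a ToyModel whose layers have already been placed on different GPUs. One possible definition, a minimal sketch following the standard PyTorch model-parallel pattern (the device fallback to CPU is added here so the sketch also runs on a machine without two GPUs), is:

```python
import torch
import torch.nn as nn

# Hypothetical device assignment; falls back to CPU when two GPUs are absent.
dev0 = 'cuda:0' if torch.cuda.device_count() >= 2 else 'cpu'
dev1 = 'cuda:1' if torch.cuda.device_count() >= 2 else 'cpu'

class ToyModel(nn.Module):
    """A toy model whose two layers live on different devices."""
    def __init__(self):
        super().__init__()
        self.net1 = nn.Linear(10, 10).to(dev0)  # first half on device 0
        self.relu = nn.ReLU()
        self.net2 = nn.Linear(10, 5).to(dev1)   # second half on device 1

    def forward(self, x):
        # Intermediate activations must be moved between devices by hand.
        x = self.relu(self.net1(x.to(dev0)))
        return self.net2(x.to(dev1))
```

Because each layer was moved with .to() inside __init__, model.parameters() still yields all parameters, so a single optimizer covers both halves of the model.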