Data Parallelism
Then, you can copy all your tensors to the GPU:
mytensor = my_tensor.to(device)
However, PyTorch will only use one GPU by default. In order to run on multiple GPUs, you need to wrap your model in <code>DataParallel</code>:
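A minimal sketch of the pattern described above: move tensors to the device, and wrap the model in <code>DataParallel</code> when more than one GPU is available. The <code>Net</code> module and its layer sizes here are placeholders for illustration.

```python
import torch
import torch.nn as nn


class Net(nn.Module):
    """A toy model standing in for your real network."""

    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 5)

    def forward(self, x):
        return self.fc(x)


# Pick the GPU if one is present, otherwise fall back to CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = Net()
# DataParallel splits each input batch across the visible GPUs
# and gathers the outputs back on the default device.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model.to(device)

# Copy the input tensor to the same device as the model.
inputs = torch.randn(8, 10).to(device)
outputs = model(inputs)
print(outputs.size())  # torch.Size([8, 5])
```

With a single GPU (or on CPU) the wrapper is skipped and the model runs normally; with several GPUs, each forward call transparently scatters the batch dimension across devices.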