How to check if a tensor is on CUDA or send it to CUDA in PyTorch?
Question:
I have a tensor
t = torch.zeros((4, 5, 6))
How do I check whether it is on the GPU or not, and send it to the GPU and back?
Answers:
Use t.is_cuda, t.cuda(), and t.cpu():
t = torch.randn(2,2)
t.is_cuda # returns False
t = torch.randn(2,2).cuda()
t.is_cuda # returns True
t = t.cpu()
t.is_cuda # returns False
When moving a tensor between the GPU and the CPU, a new tensor is allocated on the target device (unless the tensor is already there, in which case the original is returned).
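A minimal sketch of this allocation behavior (the GPU branch is guarded, so it only runs when CUDA is actually available):

```python
import torch

t = torch.randn(2, 2)  # allocated on the CPU

# .cpu() on a tensor already in CPU memory performs no copy
# and returns the original object.
same = t.cpu()
assert same is t

if torch.cuda.is_available():
    # .cuda() allocates a new tensor on the GPU; the original
    # CPU tensor is left untouched.
    g = t.cuda()
    assert g is not t
    assert g.is_cuda
```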
@Gulzar only shows how to check whether the tensor is on the CPU or on the GPU. You can move the tensor to the GPU (when one is available) as follows:
t = torch.rand(5, 3)
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
t = t.to(device)
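Putting the pattern above together, here is a hedged sketch of a device-agnostic round trip with .to(); it falls back to the CPU when no GPU is present, and the tensor's .device attribute confirms where it currently lives:

```python
import torch

# Pick the GPU if CUDA is available, otherwise stay on the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

t = torch.rand(5, 3).to(device)  # move (or keep) the tensor on the chosen device
print(t.device)                  # reports the device the tensor lives on

t = t.to("cpu")                  # bring it back to the CPU
assert t.device.type == "cpu"
```

An advantage of .to(device) over .cuda()/.cpu() is that the same line of code works on both GPU and CPU machines, so no branching is needed at the call site.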