How to tell PyTorch to not use the GPU?

Question:

I want to do some timing comparisons between CPU and GPU, as well as some profiling, and would like to know if there’s a way to tell PyTorch not to use the GPU and instead use the CPU only. I realize I could install another CPU-only PyTorch, but I’m hoping there’s an easier way.

Asked By: aneccodeal


Answers:

Before running your code, run this shell command to tell torch that there are no GPUs:

export CUDA_VISIBLE_DEVICES=""

Conversely, this tells it to use only one GPU (the one with id 0); a comma-separated list of ids works the same way:

export CUDA_VISIBLE_DEVICES="0"
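If you want to double-check that the masking worked (for the empty-string case above, i.e. no GPU visible to PyTorch), a quick sketch:

import torch

print(torch.cuda.is_available())   # expected: False when CUDA_VISIBLE_DEVICES=""
print(torch.cuda.device_count())   # expected: 0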
Answered By: Umang Gupta

I just wanted to add that it is also possible to do this within the PyTorch code itself:

Here is a small example taken from the PyTorch Migration Guide for 0.4.0:

# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

...

# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)

I think the example is pretty self-explanatory, but if there are any questions, just ask!
One big advantage of this syntax, as used in the example above, is that you can write code which runs on the CPU if no GPU is available, but also on the GPU, without changing a single line.

Instead of using the if-statement with torch.cuda.is_available(), you can also just set the device to the CPU like this:

device = torch.device("cpu")

Furthermore, you can create tensors on the desired device using the device argument:

mytensor = torch.rand(5, 5, device=device)

This will create a tensor directly on the device you specified previously.

I want to point out that with this syntax you can not only switch between CPU and GPU, but also between different GPUs.
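As a small illustration of that switching, here is a sketch where only the device line changes (the Linear layer is just a stand-in module, not taken from the example above):

import torch

device = torch.device("cpu")        # or torch.device("cuda:0"), torch.device("cuda:1"), ...

model = torch.nn.Linear(5, 2).to(device)   # stand-in module for illustration
x = torch.rand(3, 5, device=device)
print(model(x).device)                      # matches whichever device you picked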

I hope this is helpful!

Answered By: MBT

The simplest way using Python is:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = ""
Answered By: Milad shiri

General

As previous answers showed, you can make PyTorch run on the CPU using:

device = torch.device("cpu")

Comparing Trained Models

I would like to add how you can load a previously trained model on the CPU (examples taken from the PyTorch docs).

Note: make sure that all the data fed into the model is also on the CPU.

Recommended loading

model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH, map_location=torch.device("cpu")))

Loading entire model

model = torch.load(PATH, map_location=torch.device("cpu"))
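To make that concrete, a minimal self-contained round trip might look like this (the tiny Linear model and the model.pth filename are placeholders for this sketch, not from the docs):

import torch

model = torch.nn.Linear(10, 2)                  # placeholder model
torch.save(model.state_dict(), "model.pth")     # typically done on the GPU machine

# later, on the CPU-only machine:
cpu_model = torch.nn.Linear(10, 2)
state_dict = torch.load("model.pth", map_location=torch.device("cpu"))
cpu_model.load_state_dict(state_dict)
cpu_model.eval()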
Answered By: Daan Seuntjens

This is a real-world example: the original function using the GPU versus a new function using the CPU.

Source: https://github.com/zllrunning/face-parsing.PyTorch/blob/master/test.py

In my case I edited these four lines of code:

#totally new line of code
device=torch.device("cpu")



#net.cuda()
net.to(device)

#net.load_state_dict(torch.load(cp))
net.load_state_dict(torch.load(cp, map_location=torch.device('cpu')))

#img = img.cuda()
img = img.to(device)

#new_function_with_cpu
def evaluate(image_path='./imgs/116.jpg', cp='cp/79999_iter.pth'):
    device=torch.device("cpu")
    n_classes = 19
    net = BiSeNet(n_classes=n_classes)
    #net.cuda()
    net.to(device)
    #net.load_state_dict(torch.load(cp))
    net.load_state_dict(torch.load(cp, map_location=torch.device('cpu')))
    net.eval()

    to_tensor = transforms.Compose([transforms.ToTensor(),transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),])

    with torch.no_grad():
        img = Image.open(image_path)
        image = img.resize((512, 512), Image.BILINEAR)
        img = to_tensor(image)
        img = torch.unsqueeze(img, 0)
        #img = img.cuda()
        img = img.to(device)
        out = net(img)[0]
        parsing = out.squeeze(0).cpu().numpy().argmax(0)
        return parsing

#original_function_with_gpu
def evaluate(image_path='./imgs/116.jpg', cp='cp/79999_iter.pth'):
    n_classes = 19
    net = BiSeNet(n_classes=n_classes)
    net.cuda()
    net.load_state_dict(torch.load(cp))
    net.eval()

    to_tensor = transforms.Compose([ transforms.ToTensor(), transforms.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),])

    with torch.no_grad():
        img = Image.open(image_path)
        image = img.resize((512, 512), Image.BILINEAR)
        img = to_tensor(image)
        img = torch.unsqueeze(img, 0)
        img = img.cuda()
        out = net(img)[0]
        parsing = out.squeeze(0).cpu().numpy().argmax(0)
        return parsing
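For context, both versions are called the same way; a usage sketch (the paths are the repo defaults, so this assumes that repo's checkpoint and sample image are present):

parsing = evaluate(image_path='./imgs/116.jpg', cp='cp/79999_iter.pth')
print(parsing.shape)   # (512, 512) array of per-pixel class indices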
Answered By: quine9997

There are multiple ways to force CPU use:

  1. Set default tensor type:

    torch.set_default_tensor_type(torch.FloatTensor)
    
  2. Set device and consistently reference when creating tensors:
    (with this you can easily switch between GPU and CPU)

    device = 'cpu'
    # ...
    x = torch.rand(2, 10, device=device)
    
  3. Hide GPU from view:

    import os
    
    os.environ["CUDA_VISIBLE_DEVICES"]=""
    
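A small sketch of the first two options, checking where the resulting tensors actually live:

import torch

# Option 1: make CPU float tensors the default type
torch.set_default_tensor_type(torch.FloatTensor)
print(torch.empty(3).device)        # cpu

# Option 2: pass the device explicitly whenever you create tensors
device = 'cpu'
x = torch.rand(2, 10, device=device)
print(x.device)                     # cpu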
Answered By: iacob