torch.cuda.is_available() returns false in colab
Question:
I am trying to use a GPU in Google Colab. Below are the details of the PyTorch and CUDA versions installed in my Colab.
Torch 1.3.1 CUDA 10.1.243
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
I am pretty new to using a GPU for transfer learning on PyTorch models. My torch.cuda.is_available() returns False and I am unable to use a GPU, while torch.backends.cudnn.enabled returns True. What might be going wrong here?
Answers:
A temporary fix may be to try CUDA 10.0, as explained here.
Something like this:
conda install pytorch torchvision cudatoolkit=10.0 -c pytorch
This may be fixed in future versions.
This worked with all the versions mentioned above, and I did not have to downgrade my CUDA to 10.0. Restarting my Colab notebook after the updates had reset the runtime back to CPU; I just had to change it back to GPU.
Make sure your Hardware accelerator is set to GPU.
Runtime > Change runtime type > Hardware Accelerator
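After switching the runtime type, you can verify from a notebook cell that the GPU is actually active. A minimal sketch (assumes PyTorch is installed, as in the question):

```python
import torch

# On a GPU runtime this prints True and a device name such as "Tesla T4";
# on a CPU runtime it prints False.
print(torch.cuda.is_available())
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))
```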
In case anyone else comes here and makes the same mistake I was making:
If you are trying to check if GPU is available and you do:
if torch.cuda.is_available:
    print('GPU available')
else:
    print('Please set GPU via Edit -> Notebook Settings.')
it will always seem that a GPU is available, because torch.cuda.is_available without parentheses is a reference to the function itself, which is always truthy. Note you need to call it: torch.cuda.is_available(), not torch.cuda.is_available.
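The pitfall above is plain Python truthiness: any function or method object is truthy, so the un-called name always passes the if. A minimal stand-in (a hypothetical Accelerator class, no GPU or PyTorch needed) reproduces the same bug:

```python
class Accelerator:
    """Hypothetical stand-in for torch.cuda, for illustration only."""
    def is_available(self):
        return False  # pretend no GPU is present

acc = Accelerator()

# Bug: this tests the bound method object, which is always truthy.
buggy = bool(acc.is_available)   # True, even though no GPU

# Fix: call the method to get the real answer.
correct = acc.is_available()     # False

print(buggy, correct)  # True False
```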
(This worked in Jan 2021)
pip install torch==1.7.0+cu101 torchvision==0.6.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
with my params:
!nvcc --version
# nvcc: NVIDIA (R) Cuda compiler driver
# Copyright (c) 2005-2019 NVIDIA Corporation
# Built on Sun_Jul_28_19:07:16_PDT_2019
# Cuda compilation tools, release 10.1, V10.1.243
After I installed torchvision, the GPU became available:
!pip install torchvision