gpu

How to get the device type of a pytorch module conveniently?

How to get the device type of a pytorch module conveniently? Question: I have to stack some of my own layers on different kinds of pytorch models with different devices. E.g. A is a cuda model and B is a cpu model (but I don’t know it before I get the device type). Then the new …

Total answers: 4
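A common answer to this question is to read the device off the module's first parameter, since `nn.Module` itself has no `.device` attribute. A minimal sketch (the helper name `module_device` is ours, not part of PyTorch):

```python
import torch
import torch.nn as nn

def module_device(module: nn.Module) -> torch.device:
    """Return the device of the module's first parameter.

    Assumes all parameters live on one device; falls back to CPU for
    parameter-less modules such as nn.ReLU.
    """
    try:
        return next(module.parameters()).device
    except StopIteration:
        return torch.device("cpu")

model = nn.Linear(4, 2)      # created on CPU by default
print(module_device(model))  # -> cpu
```

If the model may be sharded across devices, this first-parameter trick is only a heuristic.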

Get total amount of free GPU memory and available using pytorch

Get total amount of free GPU memory and available using pytorch Question: I’m using google colab free GPUs for experimentation and wanted to know how much GPU memory is available to play around with. torch.cuda.memory_allocated() returns the current GPU memory occupied, but how do we determine total available memory using PyTorch. Asked By: Hari Prasad || Source …

Total answers: 3
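One way to get both numbers is `torch.cuda.mem_get_info`, which wraps CUDA's `cudaMemGetInfo` and is available in recent PyTorch releases. A sketch that degrades gracefully on CPU-only machines:

```python
import torch

def gpu_memory_report(device: int = 0):
    """Return (free_bytes, total_bytes) for a CUDA device, or None
    when no GPU is available."""
    if not torch.cuda.is_available():
        return None
    free, total = torch.cuda.mem_get_info(device)
    return free, total

report = gpu_memory_report()
if report is not None:
    free, total = report
    print(f"free: {free / 1e9:.2f} GB of {total / 1e9:.2f} GB")
```

Note that "free" here is what the driver reports for the whole device, which also accounts for memory used by other processes, not just the current PyTorch process.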

How to install nvidia apex on Google Colab

How to install nvidia apex on Google Colab Question: what I did is follow the instruction on the official github site !git clone https://github.com/NVIDIA/apex !cd apex !pip install -v --no-cache-dir ./ it gives me the error: ERROR: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found. Exception information: Traceback (most recent call last): File …

Total answers: 8
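The usual culprit in Colab is that each `!` line runs in its own subshell, so `!cd apex` does not change the directory for the following `!pip install` line, which then runs where there is no `setup.py`. A hedged sketch of the fix, chaining the commands in one shell:

```shell
# Each "!" line in Colab spawns a fresh shell, so "cd" must be chained
# with the install in the same command (or use the %cd magic instead).
git clone https://github.com/NVIDIA/apex
cd apex && pip install -v --no-cache-dir ./
```

In a notebook, prefix both lines with `!`, or replace the `cd` with Colab's `%cd apex` magic, which does persist across cells.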

Training a simple model in Tensorflow GPU slower than CPU

Training a simple model in Tensorflow GPU slower than CPU Question: I have set up a simple linear regression problem in Tensorflow, and have created simple conda environments using Tensorflow CPU and GPU both in 1.13.1 (using CUDA 10.0 in the backend on an NVIDIA Quadro P600). However, it looks like the GPU environment always …

Total answers: 5

OpenCV 4.0 and AMD processor Python

OpenCV 4.0 and AMD processor Python Question: Can I somehow use my AMD GPU to speed up computations in my Python script? I’m doing object detection using OpenCV 4.0 with the cv2.dnn module. Based on similar questions I’ve tried to use cv2.UMat but it doesn’t speed up computations, so I assume that the script was still …

Total answers: 1

Moving member tensors with module.to() in PyTorch

Moving member tensors with module.to() in PyTorch Question: I am building a Variational Autoencoder (VAE) in PyTorch and have a problem writing device agnostic code. The Autoencoder is a child of nn.Module, with an encoder and a decoder network that are as well. All weights of the network can be moved from one device to another by …

Total answers: 5
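The standard answer to this problem is `register_buffer`: a plain tensor attribute is invisible to `module.to()`, while a registered buffer moves (and is saved in `state_dict`) along with the parameters. A minimal sketch, with illustrative names (`VAE`, `prior_std`) that are ours:

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Sketch of a module whose member tensor follows .to(device)."""

    def __init__(self, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Linear(16, latent_dim)
        # A plain attribute (self.prior_std = torch.ones(latent_dim))
        # would be left behind by .to(); register_buffer makes the
        # tensor device-agnostic and part of state_dict.
        self.register_buffer("prior_std", torch.ones(latent_dim))

    def forward(self, x):
        return self.encoder(x) * self.prior_std

model = VAE().to("cpu")
print(model.prior_std.device)  # buffer sits on the module's device
```

For tensors that should move with the module but not be persisted, `register_buffer(..., persistent=False)` is available in recent PyTorch versions.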

Error after installing pip tensorflow-gpu with cuda 10

Error after installing pip tensorflow-gpu with cuda 10 Question: I want to use only the pip version of tensorflow, since with the conda version, if tensorflow-gpu hits an error the code runs on the cpu, which is undesirable. After installing cuda 10 and cudnn on ubuntu 18.04, when I import tensorflow it gives me the following error. PS: I …

Total answers: 4
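Import errors like this usually mean the pip wheel was built against a different CUDA version than the one installed; each TensorFlow 1.x release expects a specific CUDA/cuDNN pair. A hedged diagnostic sketch for checking what the loader can actually find:

```shell
# List the CUDA libraries visible to the dynamic linker; the versions
# in these filenames must match what the installed TF wheel expects.
ldconfig -p | grep -E "libcudart|libcublas|libcudnn"

# Confirm which CUDA toolkit is installed.
nvcc --version
```

If the versions disagree, either install the matching CUDA toolkit or pick the TensorFlow release built for the CUDA you have.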

How to tell PyTorch to not use the GPU?

How to tell PyTorch to not use the GPU? Question: I want to do some timing comparisons between CPU & GPU as well as some profiling and would like to know if there’s a way to tell pytorch to not use the GPU and instead use the CPU only? I realize I could install another …

Total answers: 6
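Two common answers: hide the GPUs from the process via `CUDA_VISIBLE_DEVICES` before CUDA initializes, or simply place everything on `torch.device("cpu")` explicitly. A minimal sketch of both:

```python
import os
import torch

# Option 1: hide all CUDA devices from this process. Must be set
# before torch touches CUDA for the first time.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# Option 2: be explicit about placement in device-agnostic code.
device = torch.device("cpu")
x = torch.randn(3, 3, device=device)
print(x.device)  # -> cpu
```

For timing comparisons, the explicit-device route is usually cleaner, since the same script can be re-run with `device = torch.device("cuda")` and nothing else changed.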

what is XLA_GPU and XLA_CPU for tensorflow

what is XLA_GPU and XLA_CPU for tensorflow Question: I can list gpu devices using the following tensorflow code: import tensorflow as tf from tensorflow.python.client import device_lib print(device_lib.list_local_devices()) The result is: [name: "/device:CPU:0" device_type: "CPU" memory_limit: 268435456 locality { } incarnation: 17897160860519880862, name: "/device:XLA_GPU:0" device_type: "XLA_GPU" memory_limit: 17179869184 locality { } incarnation: 9751861134541508701 physical_device_desc: "device: XLA_GPU …

Total answers: 1

How do I use TensorFlow GPU?

How do I use TensorFlow GPU? Question: How do I use TensorFlow GPU version instead of CPU version in Python 3.6 x64? import tensorflow as tf Python is using my CPU for calculations. I can notice it because I have an error: Your CPU supports instructions that this TensorFlow binary was not compiled to use: …

Total answers: 10
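For the TF 1.x era this question comes from, the GPU build was a separate package; a hedged sketch of the usual setup and verification (assumes a compatible CUDA/cuDNN install is already present):

```shell
# TF 1.x shipped the GPU build as its own package.
pip install tensorflow-gpu

# Verify that TensorFlow can actually see a GPU.
python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"
```

In TensorFlow 2.x the `tensorflow` package includes GPU support, and the modern check is `tf.config.list_physical_devices('GPU')` rather than the deprecated `tf.test.is_gpu_available()`.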