gpu

RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle)` with GPU only

Question: I’m working on a CNN with a one-dimensional signal. It works totally fine on the CPU. However, when I train the model on the GPU, a CUDA error occurs. I set os.environ['CUDA_LAUNCH_BLOCKING'] = "1" after I got RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling cublasCreate(handle). With doing …

Total answers: 8
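
The excerpt mentions setting CUDA_LAUNCH_BLOCKING; a minimal sketch of that debugging step (the model and input shapes below are hypothetical, not from the question) looks like this:

```python
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"   # must be set before CUDA is initialized

import torch

if torch.cuda.is_available():
    model = torch.nn.Linear(128, 10).cuda()      # hypothetical model
    x = torch.randn(32, 128, device="cuda")      # hypothetical input batch
    out = model(x)   # with blocking launches, a CUBLAS failure is reported here,
                     # not at some later, unrelated line
```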

Tensorflow GPU Could not load dynamic library 'cusolver64_10.dll'; dlerror: cusolver64_10.dll not found

Question: When I run import tensorflow as tf; tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None) I get the following error. Asked By: Haseeb || Source Answers: Step 1: Move to C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin. Step 2: Rename the file cusolver64_11.dll to cusolver64_10.dll. Answered By: Haseeb …

Total answers: 4
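
After the rename, a quick way to confirm TensorFlow actually sees the GPU (a sketch; note that tf.test.is_gpu_available is deprecated in newer TensorFlow releases, with tf.config.list_physical_devices as the current equivalent):

```python
import tensorflow as tf

# Current API: returns e.g. [PhysicalDevice(name='/physical_device:GPU:0', ...)]
print(tf.config.list_physical_devices("GPU"))

# Legacy check from the question (still works, but emits a deprecation warning).
print(tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None))
```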

How to check if a tensor is on cuda or send it to cuda in Pytorch?

Question: I have a tensor t = torch.zeros((4, 5, 6)). How do I check whether it is on the GPU or not, and send it to the GPU and back? Asked By: Gulzar || Source Answers: From the PyTorch forum: use t.is_cuda, …

Total answers: 2
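
Building on the t.is_cuda hint in the excerpt, a short sketch of the full round trip:

```python
import torch

t = torch.zeros((4, 5, 6))

print(t.is_cuda)          # False -> tensor lives in host (CPU) memory
print(t.device)           # device(type='cpu')

if torch.cuda.is_available():
    t = t.to("cuda")      # or t.cuda(); copies the tensor to the default GPU
    print(t.is_cuda)      # True
    t = t.to("cpu")       # or t.cpu(); brings it back to host memory
```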

How do I list all currently available GPUs with pytorch?

Question: I know I can access the current GPU using torch.cuda.current_device(), but how can I get a list of all the currently available GPUs? Asked By: vvvvv || Source Answers: You can list all the available GPUs by doing: >>> import torch >>> available_gpus = …

Total answers: 4
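
The answer excerpt is cut off; a sketch of one way to build that list with torch.cuda:

```python
import torch

# Number of CUDA devices visible to this process (respects CUDA_VISIBLE_DEVICES).
n = torch.cuda.device_count()

# One entry per device index with its human-readable name.
available_gpus = [(i, torch.cuda.get_device_name(i)) for i in range(n)]
print(available_gpus)     # e.g. [(0, 'NVIDIA GeForce RTX 3090'), ...]
```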

FFMPEG with moviepy

Question: I’m working on something that concatenates videos and adds titles to them through moviepy. From what I’ve seen on the web and on my own PC, moviepy works on the CPU and takes a lot of time to save (render) a movie. Is there a way to improve the speed by running the …

Total answers: 4
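
One hedged option (not necessarily what the answers propose): offload only the final encode to the GPU by asking ffmpeg for an NVENC encoder. This assumes moviepy 1.x, an NVIDIA GPU, and an ffmpeg build compiled with NVENC support; the compositing itself still runs on the CPU, so only the encoding step gets faster. The file names are hypothetical.

```python
from moviepy.editor import VideoFileClip, concatenate_videoclips

clips = [VideoFileClip("part1.mp4"), VideoFileClip("part2.mp4")]  # hypothetical inputs
final = concatenate_videoclips(clips)
final.write_videofile(
    "out.mp4",
    codec="h264_nvenc",    # GPU H.264 encoder instead of the default libx264
    audio_codec="aac",
)
```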

How can I get the number of CUDA cores in my GPU using Python and Numba?

Question: I would like to know how to obtain the total number of CUDA Cores in my GPU using Python, Numba and cudatoolkit. Asked By: codeonion || Source Answers: Most of what you need can be found by combining …

Total answers: 1
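
A sketch of the usual approach: Numba exposes the SM count, but CUDA has no query for "cores per SM", so that part comes from a hand-maintained table keyed by compute capability. The mapping below is partial and covers only a few architectures.

```python
from numba import cuda

CORES_PER_SM = {          # partial mapping, compute capability -> cores per SM
    (6, 0): 64, (6, 1): 128,
    (7, 0): 64, (7, 5): 64,
    (8, 0): 64, (8, 6): 128,
}

device = cuda.get_current_device()
sm_count = device.MULTIPROCESSOR_COUNT    # number of streaming multiprocessors
cc = device.compute_capability            # e.g. (8, 6)
cores = sm_count * CORES_PER_SM.get(cc, 0)
print(f"SMs: {sm_count}, compute capability: {cc}, approx CUDA cores: {cores}")
```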

How to use WMMA functions in Cupy kernels?

Question: How do I use WMMA functions such as wmma::load_matrix_sync in cupy.RawKernel or cupy.RawModule? Can someone provide a minimal example? Asked By: omer sahban || Source Answers: We can combine information on cupy RawKernel and wmma programming to provide most of the needed material. I don’t intend to …

Total answers: 1
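
A minimal, hedged sketch of the idea (not the answer itself): it assumes a Tensor-Core GPU (compute capability 7.0+) and a local CUDA toolkit so CuPy's 'nvcc' backend can compile mma.h. One warp multiplies a single 16x16x16 half-precision tile and stores a float32 result.

```python
import cupy as cp

source = r"""
#include <mma.h>
using namespace nvcuda;

extern "C" __global__
void wmma_16x16x16(const __half *a, const __half *b, float *c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, __half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, __half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);
    wmma::load_matrix_sync(a_frag, a, 16);   // leading dimension = 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}
"""

module = cp.RawModule(code=source, backend="nvcc", options=("-arch=sm_70",))
kernel = module.get_function("wmma_16x16x16")

a = cp.ones((16, 16), dtype=cp.float16)
b = cp.ones((16, 16), dtype=cp.float16)
c = cp.zeros((16, 16), dtype=cp.float32)

kernel((1,), (32,), (a, b, c))   # one full warp handles one 16x16x16 tile
print(c)                          # expect a matrix full of 16.0
```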

Is there a way to pass arguments to multiple jobs in optuna?

Question: I am trying to use optuna to search hyperparameter spaces. In one particular scenario I train a model on a machine with a few GPUs. The model and batch size allow me to run one training per GPU. So, ideally …

Total answers: 3
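
One hedged pattern for this (a sketch, not necessarily the accepted answer): pass extra arguments to the objective through a closure, and hand each parallel job its own GPU id via a thread-safe queue, since optuna's n_jobs > 1 runs trials in threads. train_model below is a placeholder for the real training loop.

```python
import queue
import optuna

N_GPUS = 2                      # hypothetical number of GPUs
gpu_queue = queue.Queue()
for gpu_id in range(N_GPUS):
    gpu_queue.put(gpu_id)

def train_model(lr, device):    # placeholder standing in for the real training code
    return 1.0 / (1.0 + abs(lr - 0.01))

def objective(trial, gpus):
    gpu_id = gpus.get()         # reserve a GPU for this trial
    try:
        lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
        return train_model(lr=lr, device=f"cuda:{gpu_id}")
    finally:
        gpus.put(gpu_id)        # release the GPU for the next trial

study = optuna.create_study(direction="maximize")
study.optimize(lambda trial: objective(trial, gpu_queue), n_trials=20, n_jobs=N_GPUS)
```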

How to get allocated GPU spec in Google Colab

Question: I’m using Google Colab for deep learning and I’m aware that they randomly allocate GPUs to users. I’d like to be able to see which GPU I’ve been allocated in any given session. Is there a way to do this in Google Colab notebooks? Note …

Total answers: 3
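
Two quick ways to see which GPU a session was allocated; both are generic CUDA/driver queries rather than anything Colab-specific, and both assume a GPU runtime is attached.

```python
import subprocess
# Driver-level listing, e.g. "GPU 0: Tesla T4 (UUID: ...)"
print(subprocess.run(["nvidia-smi", "-L"], capture_output=True, text=True).stdout)

import torch
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))   # e.g. "Tesla T4"
```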

How to programmatically determine available GPU memory with tensorflow?

Question: For a vector quantization (k-means) program I would like to know the amount of available memory on the present GPU (if there is one). This is needed to choose an optimal batch size in order to have as few batches as possible run over the …

Total answers: 5
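
A sketch of one common route: ask the driver via nvidia-smi for free memory, since TensorFlow itself mainly reports what it has already allocated (tf.config.experimental.get_memory_info returns current/peak usage, not free VRAM). This assumes nvidia-smi is on the PATH.

```python
import subprocess

def free_gpu_memory_mib(index=0):
    # Query free memory (in MiB) for one GPU as a bare number.
    out = subprocess.check_output([
        "nvidia-smi",
        f"--id={index}",
        "--query-gpu=memory.free",
        "--format=csv,noheader,nounits",
    ])
    return int(out.decode().strip())

print(free_gpu_memory_mib(0), "MiB free")
```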