RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle)` with GPU only

Question:

I’m working on a CNN with a one-dimensional signal. It works totally fine on the CPU. However, when I train the model on the GPU, a CUDA error occurs. I set os.environ['CUDA_LAUNCH_BLOCKING'] = "1" after I got RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling cublasCreate(handle). After doing this, a cublasSgemm error occurred instead of the cublasCreate error.
Though the NVIDIA documentation suspects a hardware problem, I can train other CNNs on images without any error. Below is my code for loading the data and feeding it to the training loop.

    idx = np.arange(len(dataset))  # dataset & label shuffle in once
    np.random.shuffle(idx)

    dataset = dataset[idx]
    sdnn = np.array(sdnn)[idx.astype(int)]        

    train_data, val_data = dataset[:int(0.8 * len(dataset))], dataset[int(0.8 * len(dataset)):]
    train_label, val_label = sdnn[:int(0.8 * len(sdnn))], sdnn[int(0.8 * len(sdnn)):]
    train_set = DataLoader(dataset=train_data, batch_size=opt.batch_size, num_workers=opt.workers)

    for i, data in enumerate(train_set, 0):  # data.shape = [batch_size, 3000(len(signal)), 1(channel)] tensor

        x = data.transpose(1, 2)
        label = torch.Tensor(train_label[i * opt.batch_size:i * opt.batch_size + opt.batch_size])
        x = x.to(device, non_blocking=True)
        label = label.to(device, non_blocking=True) # [batch size]
        label = label.view([len(label), 1])
        optim.zero_grad()

        # Feature of signal extract
        y_predict = model(x) # [batch size, fc3 output] # Error occurred HERE
        loss = mse(y_predict, label)

Below is the error message from this code.

File "C:/Users/Me/Desktop/Me/Study/Project/Analysis/Regression/main.py", line 217, in Processing
    y_predict = model(x) # [batch size, fc3 output]
  File "C:\Anaconda\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Users\ME\Desktop\ME\Study\Project\Analysis\Regression\cnn.py", line 104, in forward
    x = self.fc1(x)
  File "C:\Anaconda\envs\torch\lib\site-packages\torch\nn\modules\module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "C:\Anaconda\envs\torch\lib\site-packages\torch\nn\modules\linear.py", line 91, in forward
    return F.linear(input, self.weight, self.bias)
  File "C:\Anaconda\envs\torch\lib\site-packages\torch\nn\functional.py", line 1674, in linear
    ret = torch.addmm(bias, input, weight.t())
RuntimeError: CUDA error: CUBLAS_STATUS_EXECUTION_FAILED when calling `cublasSgemm( handle, opa, opb, m, n, k, &alpha, a, lda, b, ldb, &beta, c, ldc)`

I’ve tried to solve this error for weeks but can’t find the solution. If you can see anything wrong here, please let me know.

Asked By: Y.Jang


Answers:

Searching with partial keywords, I finally found a similar situation.
For stability’s sake I had been using CUDA 10.2. The reference suggested upgrading the CUDA toolkit to a higher version – 11.2 in my case – and the problem was solved!
I have run other training processes, but only this one caused the error. Since this CUDA error can occur for various reasons, changing the version can count as a solution.
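Before upgrading, it can help to confirm which CUDA toolkit the installed PyTorch build was compiled against. A quick sanity check (assuming a standard PyTorch install) is:

```python
import torch

# Print the PyTorch version, the CUDA toolkit version this build was
# compiled against, and whether a usable GPU is currently visible.
print(torch.__version__)          # e.g. "1.13.1"
print(torch.version.cuda)         # CUDA version of this build, or None for a CPU-only build
print(torch.cuda.is_available())  # True if a GPU can actually be used
```

Comparing `torch.version.cuda` with the toolkit/driver installed on the machine quickly reveals a version mismatch like the one described above.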

Answered By: Y.Jang

Please note that this error can also be caused by a mismatch between the dimensions of your input tensor and the dimensions of your nn.Linear module (e.g. input.shape = (a, b) and nn.Linear(c, c, bias=False) with b not matching c).
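A minimal sketch of that mismatch (the sizes below are made up for illustration): on the CPU it raises a readable size-mismatch RuntimeError, while on CUDA the same bug can surface as the opaque CUBLAS_STATUS_EXECUTION_FAILED error instead.

```python
import torch
import torch.nn as nn

# The input's last dimension (64) does not match the layer's in_features (128).
layer = nn.Linear(128, 10, bias=False)
x = torch.randn(4, 64)

try:
    layer(x)
except RuntimeError as e:
    print("shape mismatch:", e)  # clear "mat1 and mat2 shapes cannot be multiplied"-style message
```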

Answered By: Loich

As Loich rightly said, I think a shape mismatch is a prime reason why this error is thrown.

I too got this error while training an image recognition model, where the shape of the output of the final Conv2d layer and the input of the first Linear layer were not the same.
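One way to avoid that particular mismatch is to derive the first Linear layer's in_features from the actual Conv2d output instead of hard-coding it. A hypothetical sketch (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

# Run one dummy sample through the conv stack to measure its flattened size,
# then build the Linear layer from that, so the two can never disagree.
conv = nn.Conv2d(3, 16, kernel_size=3)          # made-up conv layer
with torch.no_grad():
    dummy = torch.zeros(1, 3, 32, 32)           # one sample at the real input size
    flat_features = conv(dummy).flatten(1).shape[1]

fc1 = nn.Linear(flat_features, 64)              # in_features guaranteed to match
out = fc1(conv(torch.zeros(2, 3, 32, 32)).flatten(1))
print(out.shape)  # torch.Size([2, 64])
```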

If none of that works, the best thing to do is to run a smaller version of the process on the CPU and recreate the error. When running on the CPU instead of CUDA, you will get a more useful traceback that can help you solve the problem.

One remedy explained in this answer (quoted above) is: with the GPU disabled, try to recreate the situation by executing the code (without changing any line) on the CPU; it should give a better, more understandable error.

P.S.: Although the original question states that their code executes fine on the CPU, I’ve posted this answer for anyone with a similar error that is not the result of a CUDA version mismatch.

Answered By: theProcrastinator

I ran into this problem today. Thanks to Y.Jang, I finally figured out that the cause was that the CUDA version on our group server is too old. Because updating CUDA requires root, I instead installed an older version of PyTorch corresponding to that CUDA version, which can be found here.

Answered By: Zhiwei

Putting another answer here which solved the issue for me:

You will see the exact same error message if you use an instance of nn.Embedding that receives an input index outside the pre-defined vocabulary range. So if you created the Embedding with 100 units and you input the index 100 (the Embedding expects inputs from 0-99!), you end up with this CUDA error, which is super hard to trace back to the embedding.
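A minimal sketch of that situation: valid indices are 0-99, so index 100 is out of range. On the CPU this raises a clear IndexError, whereas on CUDA the same bug can show up as an opaque device-side or cuBLAS error far from the Embedding itself.

```python
import torch
import torch.nn as nn

# Embedding with 100 units: valid input indices are 0..99.
emb = nn.Embedding(num_embeddings=100, embedding_dim=8)

try:
    emb(torch.tensor([100]))   # index 100 is one past the valid range
except IndexError as e:
    print("caught:", e)        # readable "index out of range" message on CPU
```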

Answered By: PKlumpp

The simplest hack that has worked for me almost every time is restarting the session or the machine itself. I suspect it happens because of accumulated cache.

Answered By: vijayt

I got this error when using fairseq. The CUDA version installed on my Amazon Linux 2 machine is 11.5 and the torch version was 1.13.1. I uninstalled torch and installed version 1.12.1, which took me past this error.

Answered By: upadrasta84

Running the same code in native Python instead of a Jupyter notebook solved my problem. It seems to be an issue between Jupyter’s kernel and CUDA.

Answered By: Frtna2