Pytorch RuntimeError: The size of tensor a (120000) must match the size of tensor b (2) at non-singleton dimension 2

Question:

I am using code from here to train a model to predict the function of DNA.

The code with the bug is as follows:

import torch
from torch import nn, Tensor

class upd_GELU(nn.Module):
    def __init__(self):
        super(upd_GELU, self).__init__()
        self.constant_param = nn.Parameter(torch.Tensor([1,702]))
        self.sig = nn.Sigmoid()
        
    def forward(self, input: Tensor) -> Tensor:
        print(self.constant_param.shape)
        print(input.shape)
        outval = torch.mul(self.sig(torch.mul(self.constant_param, input)), input)
        return outval

The error message is as follows:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
Input In [73], in <cell line: 2>()
      1 net = BasenjiModel()
----> 2 summary(net, input_size = [(4, 120000)], batch_size = BATCH_SIZE, device = "cpu")
      4 def opt_rule(epoch):
      5     if epoch >= 34:

File ~/miniconda3/lib/python3.8/site-packages/torchsummary/torchsummary.py:72, in summary(model, input_size, batch_size, device)
     68 model.apply(register_hook)
     70 # make a forward pass
     71 # print(x.shape)
---> 72 model(*x)
     74 # remove these hooks
     75 for h in hooks:

...(This part is omitted)

File ~/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1128, in Module._call_impl(self, *input, **kwargs)
   1125     bw_hook = hooks.BackwardHook(self, full_backward_hooks)
   1126     input = bw_hook.setup_input_hook(input)
-> 1128 result = forward_call(*input, **kwargs)
   1129 if _global_forward_hooks or self._forward_hooks:
   1130     for hook in (*_global_forward_hooks.values(), *self._forward_hooks.values()):

Input In [64], in upd_GELU.forward(self, input)
      8 print(self.constant_param.shape)
      9 print(input.shape)
---> 10 outval = torch.mul(self.sig(torch.mul(input, self.constant_param)), input)
     11 return outval

RuntimeError: The size of tensor a (120000) must match the size of tensor b (2) at non-singleton dimension 2

The sizes of the two tensors, as printed in forward (screenshot omitted): self.constant_param is torch.Size([2]) and input is torch.Size([2, 4, 120000]).

How can I fix it? Thanks a lot.

Why wasn’t it broadcast?

Asked By: Da-qiong


Answers:

torch.mul broadcasts its operands, but broadcasting aligns shapes from the trailing dimension: at each position, the two sizes must be equal, or one of them must be 1 (or missing). Your constant_param has shape [2], so its only dimension is compared against the last dimension of input (120000), and broadcasting fails. The same error can be reproduced with this example:

import torch
a = torch.randn(2)
b = torch.randn(2,4,120000)
print("shape of a:" + str(a.shape))
print("shape of b:" + str(b.shape))
result = torch.mul(a,b)

Output:

shape of a:torch.Size([2])
shape of b:torch.Size([2, 4, 120000])    
RuntimeError: The size of tensor a (2) must match the size of tensor b (120000) at non-singleton dimension 2
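
This is also why the tensor in the question was not broadcast: broadcasting aligns shapes from the right, so a 1-D tensor is always compared against the last dimension of the other operand, never the first. A minimal sketch of that rule, with shapes mirroring the example above:

import torch

b = torch.randn(2, 4, 120000)

# A 1-D tensor broadcasts only if its length matches b's last dimension:
ok = torch.randn(120000)       # (120000,) aligns with b's dimension 2
print(torch.mul(ok, b).shape)  # torch.Size([2, 4, 120000])

# A length-2 tensor is also compared against dimension 2 (size 120000),
# not against dimension 0 (size 2):
bad = torch.randn(2)
try:
    torch.mul(bad, b)
except RuntimeError as e:
    print(e)  # same mismatch error as above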

To fix it, add singleton dimensions to the first tensor (a in this example) so that its size-2 dimension lines up with the matching dimension of b, as follows:

import torch
a = torch.randn(2).unsqueeze(1).unsqueeze(2)
b = torch.randn(2,4,120000)
print("shape of a:" + str(a.shape))
print("shape of b:" + str(b.shape))
result = torch.mul(a,b)
print("shape of result:"+str(result.shape))

Output:

shape of a:torch.Size([2, 1, 1])
shape of b:torch.Size([2, 4, 120000])
shape of result:torch.Size([2, 4, 120000])
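
For reference, a.view(2, 1, 1), a.reshape(2, 1, 1), and a[:, None, None] all produce the same [2, 1, 1] shape as the two unsqueeze calls.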

In your example, this modification should solve the problem:

class upd_GELU(nn.Module):
    def __init__(self):
        super(upd_GELU, self).__init__()
        self.constant_param = nn.Parameter(torch.Tensor([1,702]))
        self.sig = nn.Sigmoid()

    def forward(self, input: Tensor) -> Tensor:
        # Use a local variable for the reshaped view: reassigning
        # self.constant_param to a plain tensor would raise a TypeError,
        # since nn.Module only accepts nn.Parameter for that attribute.
        param = self.constant_param.unsqueeze(1).unsqueeze(2)  # shape [2, 1, 1]
        outval = torch.mul(self.sig(torch.mul(param, input)), input)
        return outval
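
As a quick sanity check of the modified module, the forward pass now runs on an input of shape (2, 4, 120000), matching the shapes shown in the question (a sketch reusing the class defined above):

import torch

gelu = upd_GELU()
x = torch.randn(2, 4, 120000)  # same shape as the input in the question
out = gelu(x)
print(out.shape)               # torch.Size([2, 4, 120000])

Note that after the two unsqueeze calls the parameter's size-2 dimension lines up with the first (batch) dimension of the input, so this works because that dimension happens to be 2 here.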
Answered By: A.Mounir