1D CNN in PyTorch: mat1 and mat2 shapes cannot be multiplied (10×3 and 10×2)

Question:

I have a time series with samples of size 500 and 2 types of labels, and I want to construct a 1D CNN on them with PyTorch:

import numpy as np
import torch
import torch.nn as nn

class Simple1DCNN(torch.nn.Module):
    def __init__(self):
        super(Simple1DCNN, self).__init__()
        self.layer1 = torch.nn.Conv1d(in_channels=50, 
                                      out_channels=20, 
                                      kernel_size=5, 
                                      stride=2)
        self.act1 = torch.nn.ReLU()
        self.layer2 = torch.nn.Conv1d(in_channels=20, 
                                      out_channels=10, 
                                      kernel_size=1)
        
        self.fc1 = nn.Linear(10* 1 * 1, 2)
    def forward(self, x):
        x = x.view(1, 50,-1)
        x = self.layer1(x)
        x = self.act1(x)
        x = self.layer2(x)
        x = self.fc1(x)
        
        return x

model = Simple1DCNN()
model(torch.tensor(np.random.uniform(-10, 10, 500)).float())

But I got this error message:

Traceback (most recent call last):
  File "so_pytorch.py", line 28, in <module>
    model(torch.tensor(np.random.uniform(-10, 10, 500)).float())
  File "/Users/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "so_pytorch.py", line 23, in forward
    x = self.fc1(x)
  File "/Users/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/Users/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 93, in forward
    return F.linear(input, self.weight, self.bias)
  File "/Users/lib/python3.8/site-packages/torch/nn/functional.py", line 1692, in linear
    output = input.matmul(weight.t())
RuntimeError: mat1 and mat2 shapes cannot be multiplied (10x3 and 10x2)

What am I doing wrong?

Asked By: ecjb


Answers:

The shape of the output of the line x = self.layer2(x) (which is also the input to the next line x = self.fc1(x)) is torch.Size([1, 10, 3]).

From the definition of self.fc1, it expects the last dimension of its input to be 10 * 1 * 1, which is 10, whereas the last dimension of your input is 3, hence the error.
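A quick way to confirm those shapes (the print calls below are just a debugging sketch, not part of the original code) is to print them inside forward:

def forward(self, x):
    x = x.view(1, 50, -1)
    print(x.shape)   # torch.Size([1, 50, 10])
    x = self.layer1(x)
    print(x.shape)   # torch.Size([1, 20, 3]), since (10 - 5) // 2 + 1 = 3
    x = self.act1(x)
    x = self.layer2(x)
    print(x.shape)   # torch.Size([1, 10, 3]), kernel_size=1 keeps the length
    x = self.fc1(x)  # this is the line that raises the RuntimeError
    return x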

I don’t know exactly what you’re trying to do, but assuming you want one of the following:

  1. Label the entire 500-sample sequence with one of the two labels. Then you do this (a complete sketch of this option is shown after the list):
# replace self.fc1 = nn.Linear(10* 1 * 1, 2) with
self.fc1 = nn.Linear(10 * 3, 2)

# replace x = self.fc1(x) with
x = x.view(1, -1)
x = self.fc1(x)
  2. Label each of the 10 timesteps with one of the two labels. Then you do this:
# replace self.fc1 = nn.Linear(10* 1 * 1, 2) with
self.fc1 = nn.Linear(3, 2)

The output shape for option 1 will be (batch size, 2), and for option 2 it will be (batch size, 10, 2).
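For reference, here is a minimal end-to-end sketch of option 1 (the layer sizes are taken from the question; the random input just mimics the 500-sample series):

import numpy as np
import torch
import torch.nn as nn

class Simple1DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Conv1d(in_channels=50, out_channels=20, kernel_size=5, stride=2)
        self.act1 = nn.ReLU()
        self.layer2 = nn.Conv1d(in_channels=20, out_channels=10, kernel_size=1)
        self.fc1 = nn.Linear(10 * 3, 2)  # 10 channels * length 3 after the conv layers

    def forward(self, x):
        x = x.view(1, 50, -1)   # (batch=1, channels=50, length=10)
        x = self.layer1(x)      # (1, 20, 3)
        x = self.act1(x)
        x = self.layer2(x)      # (1, 10, 3)
        x = x.view(1, -1)       # flatten to (1, 30) before the linear layer
        x = self.fc1(x)         # (1, 2)
        return x

model = Simple1DCNN()
out = model(torch.tensor(np.random.uniform(-10, 10, 500)).float())
print(out.shape)  # torch.Size([1, 2])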

Answered By: Nerveless_child

The activation input x to self.fc1 doesn’t have the expected number of features, so you would need to change the in_features of the nn.Linear layer self.fc1.
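If you don’t want to compute in_features by hand, one common trick (a sketch assuming the same 500-sample input as above, not part of the original answer) is to push a dummy tensor through the convolutional layers once and read off the flattened size:

import torch

model = Simple1DCNN()
with torch.no_grad():
    dummy = torch.zeros(1, 50, 10)  # the shape the model reshapes its input to
    conv_out = model.layer2(model.act1(model.layer1(dummy)))

in_features = conv_out.numel()      # 10 * 3 = 30
model.fc1 = torch.nn.Linear(in_features, 2)
# note: forward still has to flatten x (e.g. x = x.view(1, -1)) before self.fc1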

Answered By: Reza Karimi