RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

Question:

I'm using a PyTorch UNet model to which I am feeding an image as input, with the image mask as the label, and training the dataset on it.
I picked up the UNet model from somewhere else, and I am using cross-entropy as the loss function, but I get this dimension out of range error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-358-fa0ef49a43ae> in <module>()
     16 for epoch in range(0, num_epochs):
     17     # train for one epoch
---> 18     curr_loss = train(train_loader, model, criterion, epoch, num_epochs)
     19 
     20     # store best loss and save a model checkpoint

<ipython-input-356-1bd6c6c281fb> in train(train_loader, model, criterion, epoch, num_epochs)
     16         # measure loss
     17         print (outputs.size(),labels.size())
---> 18         loss = criterion(outputs, labels)
     19         losses.update(loss.data[0], images.size(0))
     20 

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

<ipython-input-355-db66abcdb074> in forward(self, logits, targets)
      9         probs_flat = probs.view(-1)
     10         targets_flat = targets.view(-1)
---> 11         return self.crossEntropy_loss(probs_flat, targets_flat)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    599         _assert_no_grad(target)
    600         return F.cross_entropy(input, target, self.weight, self.size_average,
--> 601                                self.ignore_index, self.reduce)
    602 
    603 

/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce)
   1138         >>> loss.backward()
   1139     """
--> 1140     return nll_loss(log_softmax(input, 1), target, weight, size_average, ignore_index, reduce)
   1141 
   1142 

/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in log_softmax(input, dim, _stacklevel)
    784     if dim is None:
    785         dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
--> 786     return torch._C._nn.log_softmax(input, dim)
    787 
    788 

RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

Part of my code looks like this:

class crossEntropy(nn.Module):
    def __init__(self, weight = None, size_average = True):
        super(crossEntropy, self).__init__()
        self.crossEntropy_loss = nn.CrossEntropyLoss(weight, size_average)
        
    def forward(self, logits, targets):
        probs = F.sigmoid(logits)
        probs_flat = probs.view(-1)
        targets_flat = targets.view(-1)
        return self.crossEntropy_loss(probs_flat, targets_flat)


class UNet(nn.Module):
    def __init__(self, imsize):
        super(UNet, self).__init__()
        self.imsize = imsize

        self.activation = F.relu
        
        self.pool1 = nn.MaxPool2d(2)
        self.pool2 = nn.MaxPool2d(2)
        self.pool3 = nn.MaxPool2d(2)
        self.pool4 = nn.MaxPool2d(2)
        self.conv_block1_64 = UNetConvBlock(4, 64)
        self.conv_block64_128 = UNetConvBlock(64, 128)
        self.conv_block128_256 = UNetConvBlock(128, 256)
        self.conv_block256_512 = UNetConvBlock(256, 512)
        self.conv_block512_1024 = UNetConvBlock(512, 1024)

        self.up_block1024_512 = UNetUpBlock(1024, 512)
        self.up_block512_256 = UNetUpBlock(512, 256)
        self.up_block256_128 = UNetUpBlock(256, 128)
        self.up_block128_64 = UNetUpBlock(128, 64)

        self.last = nn.Conv2d(64, 2, 1)


    def forward(self, x):
        block1 = self.conv_block1_64(x)
        pool1 = self.pool1(block1)

        block2 = self.conv_block64_128(pool1)
        pool2 = self.pool2(block2)

        block3 = self.conv_block128_256(pool2)
        pool3 = self.pool3(block3)

        block4 = self.conv_block256_512(pool3)
        pool4 = self.pool4(block4)

        block5 = self.conv_block512_1024(pool4)

        up1 = self.up_block1024_512(block5, block4)

        up2 = self.up_block512_256(up1, block3)

        up3 = self.up_block256_128(up2, block2)

        up4 = self.up_block128_64(up3, block1)

        return F.log_softmax(self.last(up4))
Asked By: Ryan


Answers:

According to your code:

probs_flat = probs.view(-1)
targets_flat = targets.view(-1)
return self.crossEntropy_loss(probs_flat, targets_flat)

You are giving two 1-D tensors to nn.CrossEntropyLoss, but according to the documentation, it expects:

Input: (N,C) where C = number of classes
Target: (N) where each value is 0 <= targets[i] <= C-1
Output: scalar. If reduce is False, then (N) instead.

I believe that is the cause of the problem you are encountering.
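
For segmentation, this means the class dimension has to survive any flattening. As a rough sketch (the sizes here are hypothetical, not taken from the question), either of these shape arrangements satisfies nn.CrossEntropyLoss:

import torch
import torch.nn as nn

# Hypothetical sizes: batch of 4, 2 classes, 64x64 masks
N, C, H, W = 4, 2, 64, 64
logits = torch.randn(N, C, H, W)          # raw network output (no sigmoid/softmax)
targets = torch.randint(0, C, (N, H, W))  # one class index per pixel, dtype long

criterion = nn.CrossEntropyLoss()

# Option 1: flatten the pixels but keep the class dimension:
# input (N*H*W, C), target (N*H*W)
loss_flat = criterion(logits.permute(0, 2, 3, 1).reshape(-1, C), targets.reshape(-1))

# Option 2: recent PyTorch versions accept an (N, C, H, W) input
# with an (N, H, W) target directly
loss_direct = criterion(logits, targets)

print(loss_flat, loss_direct)  # same value with the default mean reduction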

Answered By: Wasi Ahmad

The problem is that you are passing in bad arguments to torch.nn.CrossEntropyLoss in your classification problem.

Specifically, in this line

---> 18         loss = criterion(outputs, labels)

the argument labels is not what CrossEntropyLoss is expecting. labels should be a 1-D tensor whose length is the batch size, matching outputs in your code. The value of each element should be the 0-based ID of the target class.

Here’s an example.

Suppose you have batch size B=2, and each data instance is given one of K=3 classes.

Further, suppose that the final layer of your neural network is outputting the following raw logits (the values before softmax) for each of the two instances in your batch. Those logits and the true label for each data instance are shown below.

                Logits (before softmax)
               Class 0  Class 1  Class 2    True class
               -------  -------  -------    ----------
Instance 0:        0.5      1.5      0.1             1
Instance 1:        2.2      1.3      1.7             2

Then in order to call CrossEntropyLoss correctly, you need two variables:

  • input of shape (B, K) containing the logit values
  • target of shape B containing the index of the true class

Here’s how to correctly use CrossEntropyLoss with the values above. I am using torch.__version__ 1.9.0.

import torch

yhat = torch.Tensor([[0.5, 1.5, 0.1], [2.2, 1.3, 1.7]])
print(yhat)
# tensor([[0.5000, 1.5000, 0.1000],
#         [2.2000, 1.3000, 1.7000]])

y = torch.Tensor([1, 2]).to(torch.long)
print(y)
# tensor([1, 2])

loss = torch.nn.CrossEntropyLoss()
cel = loss(input=yhat, target=y)
print(cel)
# tensor(0.8393)

I’m guessing that the error you received originally

RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

probably occurred because you are trying to compute cross entropy loss for one data instance, where the target is encoded as one-hot. You probably had your data like this:

                Logits (before softmax)
               Class 0  Class 1  Class 2  True class 0 True class 1 True class 2
               -------  -------  -------  ------------ ------------ ------------
Instance 0:        0.5      1.5      0.1             0            1            0

Here’s the code to represent the data above:

import torch

yhat = torch.Tensor([0.5, 1.5, 0.1])
print(yhat)
# tensor([0.5000, 1.5000, 0.1000])

y = torch.Tensor([0, 1, 0]).to(torch.long)
print(y)
# tensor([0, 1, 0])

loss = torch.nn.CrossEntropyLoss()
cel = loss(input=yhat, target=y)
print(cel)

At this point, I get the following error:

---> 10 cel = loss(input=yhat, target=y)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

That error message is incomprehensible and inactionable, in my opinion.
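
If your labels really are one-hot encoded, one possible fix (an assumption about your data, not something confirmed by the question) is to convert them back to class indices with argmax before calling the loss:

import torch

yhat = torch.Tensor([[0.5, 1.5, 0.1], [2.2, 1.3, 1.7]])

# one-hot targets, one row per instance (assumed format)
y_onehot = torch.Tensor([[0, 1, 0], [0, 0, 1]])

# argmax over the class dimension recovers the index format CrossEntropyLoss expects
y = y_onehot.argmax(dim=1)
print(y)
# tensor([1, 2])

cel = torch.nn.CrossEntropyLoss()(yhat, y)
print(cel)
# tensor(0.8393)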

See also a similar problem but in TensorFlow:

What are logits? What is the difference between softmax and softmax_cross_entropy_with_logits?

I had the same issue, and since this thread doesn't provide any clear answer, I will post my solution despite the age of the post.

In the forward() method, you need to return x too.
It needs to look like so:

return F.log_softmax(self.last(up4)), x
Answered By: AKJ

The crossEntropy_loss function appears to expect a 2-D tensor, probably for a batch. For a single input, the shape should be (1, N) rather than a 1-D tensor of N elements, so you should replace

return self.crossEntropy_loss(probs_flat, targets_flat)

with

return self.crossEntropy_loss(torch.unsqueeze(probs_flat,0), torch.unsqueeze(targets_flat,0))
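
For reference, here is what torch.unsqueeze does to the shapes (a minimal illustration, not the asker's data):

import torch

probs_flat = torch.Tensor([0.5, 1.5, 0.1])  # shape (3,)
batched = torch.unsqueeze(probs_flat, 0)    # shape (1, 3): a batch of one
print(batched.shape)
# torch.Size([1, 3])

Note that nn.CrossEntropyLoss expects the target to stay a 1-D tensor of class indices, so unsqueezing targets_flat as well may trade this error for a different one.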
Answered By: Sushil Surana

If you are using torch.cat() and this problem happens, use view(1, -1) like this:

x = x.to(device).view(1, -1)
y = y.to(device).view(1, -1)

concat = torch.cat((x, y), 1)
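
view(1, -1) reshapes each tensor into a single row so both operands have the same number of dimensions, which torch.cat requires. A minimal sketch with made-up values:

import torch

x = torch.arange(4.)           # shape (4,)
y = torch.arange(4.)           # shape (4,)

x = x.view(1, -1)              # shape (1, 4)
y = y.view(1, -1)              # shape (1, 4)

concat = torch.cat((x, y), 1)  # concatenate along dim 1 -> shape (1, 8)
print(concat.shape)
# torch.Size([1, 8])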
Answered By: Pooya Chavoshi