RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

Posted: 2018-06-30 19:49:59

[Question]:

I am using a PyTorch U-Net model. I feed images to the model as input, pass the corresponding image masks as labels, and train on the dataset. I got the U-Net model from elsewhere, and I use cross-entropy loss as the loss function, but I get this dimension out of range error:

RuntimeError                              Traceback (most recent call last)
<ipython-input-358-fa0ef49a43ae> in <module>()
     16 for epoch in range(0, num_epochs):
     17     # train for one epoch
---> 18     curr_loss = train(train_loader, model, criterion, epoch, num_epochs)
     19 
     20     # store best loss and save a model checkpoint

<ipython-input-356-1bd6c6c281fb> in train(train_loader, model, criterion, epoch, num_epochs)
     16         # measure loss
     17         print (outputs.size(),labels.size())
---> 18         loss = criterion(outputs, labels)
     19         losses.update(loss.data[0], images.size(0))
     20 

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

<ipython-input-355-db66abcdb074> in forward(self, logits, targets)
      9         probs_flat = probs.view(-1)
     10         targets_flat = targets.view(-1)
---> 11         return self.crossEntropy_loss(probs_flat, targets_flat)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
    323         for hook in self._forward_pre_hooks.values():
    324             hook(self, input)
--> 325         result = self.forward(*input, **kwargs)
    326         for hook in self._forward_hooks.values():
    327             hook_result = hook(self, input, result)

/usr/local/lib/python3.5/dist-packages/torch/nn/modules/loss.py in forward(self, input, target)
    599         _assert_no_grad(target)
    600         return F.cross_entropy(input, target, self.weight, self.size_average,
--> 601                                self.ignore_index, self.reduce)
    602 
    603 

/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce)
   1138         >>> loss.backward()
   1139     """
-> 1140     return nll_loss(log_softmax(input, 1), target, weight, size_average, ignore_index, reduce)
   1141 
   1142 

/usr/local/lib/python3.5/dist-packages/torch/nn/functional.py in log_softmax(input, dim, _stacklevel)
    784     if dim is None:
    785         dim = _get_softmax_dim('log_softmax', input.dim(), _stacklevel)
--> 786     return torch._C._nn.log_softmax(input, dim)
    787 
    788 

RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

Part of my code is shown below:

class crossEntropy(nn.Module):
    def __init__(self, weight = None, size_average = True):
        super(crossEntropy, self).__init__()
        self.crossEntropy_loss = nn.CrossEntropyLoss(weight, size_average)
        
    def forward(self, logits, targets):
        probs = F.sigmoid(logits)
        probs_flat = probs.view(-1)
        targets_flat = targets.view(-1)
        return self.crossEntropy_loss(probs_flat, targets_flat)


class UNet(nn.Module):
    def __init__(self, imsize):
        super(UNet, self).__init__()
        self.imsize = imsize

        self.activation = F.relu
        
        self.pool1 = nn.MaxPool2d(2)
        self.pool2 = nn.MaxPool2d(2)
        self.pool3 = nn.MaxPool2d(2)
        self.pool4 = nn.MaxPool2d(2)
        self.conv_block1_64 = UNetConvBlock(4, 64)
        self.conv_block64_128 = UNetConvBlock(64, 128)
        self.conv_block128_256 = UNetConvBlock(128, 256)
        self.conv_block256_512 = UNetConvBlock(256, 512)
        self.conv_block512_1024 = UNetConvBlock(512, 1024)

        self.up_block1024_512 = UNetUpBlock(1024, 512)
        self.up_block512_256 = UNetUpBlock(512, 256)
        self.up_block256_128 = UNetUpBlock(256, 128)
        self.up_block128_64 = UNetUpBlock(128, 64)

        self.last = nn.Conv2d(64, 2, 1)


    def forward(self, x):
        block1 = self.conv_block1_64(x)
        pool1 = self.pool1(block1)

        block2 = self.conv_block64_128(pool1)
        pool2 = self.pool2(block2)

        block3 = self.conv_block128_256(pool2)
        pool3 = self.pool3(block3)

        block4 = self.conv_block256_512(pool3)
        pool4 = self.pool4(block4)

        block5 = self.conv_block512_1024(pool4)

        up1 = self.up_block1024_512(block5, block4)

        up2 = self.up_block512_256(up1, block3)

        up3 = self.up_block256_128(up2, block2)

        up4 = self.up_block128_64(up3, block1)

        return F.log_softmax(self.last(up4))

[Comments]:

[Answer 1]:

Based on your code:

probs_flat = probs.view(-1)
targets_flat = targets.view(-1)
return self.crossEntropy_loss(probs_flat, targets_flat)

You are feeding two 1-D tensors to nn.CrossEntropyLoss, but according to the documentation it expects:

Input: (N,C) where C = number of classes
Target: (N) where each value is 0 <= targets[i] <= C-1
Output: scalar. If reduce is False, then (N) instead.

I believe this is the reason you are running into the problem.
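
For a segmentation setup like the one in the question, here is a minimal sketch of what that means in practice. It is my own illustration, assuming the U-Net emits raw per-pixel scores of shape (N, 2, H, W) and the mask is a LongTensor of shape (N, H, W) holding class indices 0 or 1; in that case you can drop the sigmoid and the flattening and pass the tensors to nn.CrossEntropyLoss directly, which accepts spatial targets in recent PyTorch versions:

import torch
import torch.nn as nn

# Hypothetical shapes for illustration only: batch of 4, 2 classes, 64x64 masks.
N, C, H, W = 4, 2, 64, 64

logits = torch.randn(N, C, H, W)         # raw network output; no sigmoid/softmax applied
masks = torch.randint(0, C, (N, H, W))   # one class index per pixel, dtype int64 by default

criterion = nn.CrossEntropyLoss()        # applies log_softmax over the class dimension itself
loss = criterion(logits, masks)
print(loss)                              # scalar loss averaged over all pixels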

[Comments]:

@Wasi Could you explain the "expected to be in range of [-1, 0]..." part of the error message? I find it a bit cryptic... Thanks.

Re: the error message, see github.com/pytorch/pytorch/issues/5554#issuecomment-370456868

[Answer 2]:

The problem is that you are passing the wrong arguments to torch.nn.CrossEntropyLoss in your classification problem.

Specifically, in this line

---> 18         loss = criterion(outputs, labels)

the argument labels is not what CrossEntropyLoss expects. labels should be a 1-D tensor whose length is the batch size, matching outputs in your code; the value of each element should be the ID of the target class, starting from 0.

Here is an example.

Suppose your batch size is B=2 and each data instance is assigned one of K=3 classes.

Further, suppose the last layer of your neural network outputs the following raw logits (the values before the softmax) for each of the two instances in your batch. These logits and the true label for each data instance are shown below.

                Logits (before softmax)
               Class 0  Class 1  Class 2    True class
               -------  -------  -------    ----------
Instance 0:        0.5      1.5      0.1             1
Instance 1:        2.2      1.3      1.7             2

Then, to call CrossEntropyLoss correctly, you need two variables:

input of shape (B, K), containing the logit values
target of shape (B,), containing the indices of the true classes

Here is how to call CrossEntropyLoss correctly with the values above. I am using torch.__version__ 1.9.0.

import torch

yhat = torch.Tensor([[0.5, 1.5, 0.1], [2.2, 1.3, 1.7]])
print(yhat)
# tensor([[0.5000, 1.5000, 0.1000],
#         [2.2000, 1.3000, 1.7000]])

y = torch.Tensor([1, 2]).to(torch.long)
print(y)
# tensor([1, 2])

loss = torch.nn.CrossEntropyLoss()
cel = loss(input=yhat, target=y)
print(cel)
# tensor(0.8393)
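
As a quick sanity check (my addition, not part of the original answer), the same value can be reproduced by hand from the definition of cross entropy: take the log-softmax of the logits, pick out the log-probability of the true class for each instance, and average the negatives.

import torch
import torch.nn.functional as F

yhat = torch.Tensor([[0.5, 1.5, 0.1], [2.2, 1.3, 1.7]])
y = torch.Tensor([1, 2]).to(torch.long)

# Negative mean of log p(true class) over the batch; matches the CrossEntropyLoss value above.
log_probs = F.log_softmax(yhat, dim=1)
manual = -log_probs[torch.arange(2), y].mean()
print(manual)
# tensor(0.8393)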

My guess is that the error you originally received,

RuntimeError: dimension out of range (expected to be in range of [-1, 0], but got 1)

may have occurred because you were trying to compute the cross-entropy loss for a single data instance whose target was one-hot encoded. Your data may have looked like this:

                Logits (before softmax)
               Class 0  Class 1  Class 2  True class 0 True class 1 True class 2
               -------  -------  -------  ------------ ------------ ------------
Instance 0:        0.5      1.5      0.1             0            1            0

Here is code representing the data above:

import torch

yhat = torch.Tensor([0.5, 1.5, 0.1])
print(yhat)
# tensor([0.5000, 1.5000, 0.1000])

y = torch.Tensor([0, 1, 0]).to(torch.long)
print(y)
# tensor([0, 1, 0])

loss = torch.nn.CrossEntropyLoss()
cel = loss(input=yhat, target=y)
print(cel)

At this point, I get the following error:

---> 10 cel = loss(input=yhat, target=y)

IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

In my opinion, that error message is incomprehensible and not actionable.
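
If your targets really are one-hot encoded, one way to make the call above work is sketched below (my own addition, not from the original answer): add a batch dimension to the logits and convert the one-hot vector into a class index with argmax, so the shapes become (1, K) and (1,).

import torch

yhat = torch.Tensor([0.5, 1.5, 0.1])
y_onehot = torch.Tensor([0, 1, 0])

yhat_batched = yhat.unsqueeze(0)            # (K,) -> (1, K)
y_index = y_onehot.argmax().unsqueeze(0)    # one-hot (K,) -> class index (1,)

loss = torch.nn.CrossEntropyLoss()
cel = loss(input=yhat_batched, target=y_index)
print(cel)
# tensor(0.4790)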

See also a similar question, but in TensorFlow:

What are logits? What is the difference between softmax and softmax_cross_entropy_with_logits?

[Comments]:

I like this answer. It should be accepted, too.

[Answer 3]:

I had the same problem, and since this thread doesn't offer any clear answer, I'll post my solution even though the thread is old.

In the forward() method, you also need to return x. It needs to look like this:

return F.log_softmax(self.last(up4)), x

[Comments]:
