ValueError: Target size (torch.Size([128])) must be the same as input size (torch.Size([112]))

Posted: 2021-11-12 22:30:32

【Question】:

I have a training function with two label vectors inside:

d_labels_a = torch.zeros(128)
d_labels_b = torch.ones(128)

Then I compute these features:

# Compute output
features_a = nets[0](input_a)
features_b = nets[1](input_b)
features_c = nets[2](inputs)

Then the domain classifier (nets[4]) makes its predictions:

d_pred_a = torch.squeeze(nets[4](features_a))
d_pred_b = torch.squeeze(nets[4](features_b))
d_pred_a = d_pred_a.float()
d_pred_b = d_pred_b.float()
print(d_pred_a.shape)

The error occurs in the loss function:

pred_a = torch.squeeze(nets[3](features_a))
pred_b = torch.squeeze(nets[3](features_b))
pred_c = torch.squeeze(nets[3](features_c))

        loss = criterion(pred_a, labels_a) + criterion(pred_b, labels_b) + criterion(pred_c, labels) + d_criterion(d_pred_a, d_labels_a) + d_criterion(d_pred_b, d_labels_b)

The problem is that d_pred_a/b is different from d_labels_a/b, but only after a certain point. Indeed, when I print the shape of d_pred_a/b it is torch.Size([128]), but then it changes to torch.Size([112]) on its own.
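The jump from 128 to 112 is consistent with the final batch of an epoch being smaller than the rest. A plain-Python sketch of the batching arithmetic, assuming a hypothetical dataset of 1392 samples (the real size is not stated in the question):

```python
def batch_sizes(n_samples, batch_size):
    """Sizes of the batches a loader would yield when drop_last is off."""
    return [min(batch_size, n_samples - start)
            for start in range(0, n_samples, batch_size)]

sizes = batch_sizes(1392, 128)
print(sizes[:2], sizes[-1])  # [128, 128] 112  -> 1392 % 128 = 112
```

Every batch is full except the last, which holds only the leftover samples, so any tensor hard-coded to size 128 stops matching there.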

It comes from here:

# Compute output
features_a = nets[0](input_a)
features_b = nets[1](input_b)
features_c = nets[2](inputs)

Because if I print the shape of features_a it is torch.Size([128, 2048]), but it becomes torch.Size([112, 2048]). nets[0] is a VGG, like this:

class VGG16(nn.Module):

  def __init__(self, input_size, batch_norm=False):
    super(VGG16, self).__init__()

    self.in_channels,self.in_width,self.in_height = input_size

    self.block_1 = VGGBlock(self.in_channels,64,batch_norm=batch_norm)
    self.block_2 = VGGBlock(64, 128,batch_norm=batch_norm)
    self.block_3 = VGGBlock(128, 256,batch_norm=batch_norm)
    self.block_4 = VGGBlock(256,512,batch_norm=batch_norm)


  @property
  def input_size(self):
      return self.in_channels,self.in_width,self.in_height

  def forward(self, x):

    x = self.block_1(x)
    x = self.block_2(x)
    x = self.block_3(x)
    x = self.block_4(x)
    # x = self.avgpool(x)
    x = torch.flatten(x,1)

    return x
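For reference, torch.flatten(x, 1) preserves the leading (batch) dimension, so the feature shape simply tracks the batch size. A quick shape check, assuming a hypothetical block_4 output of 512 channels on a 2×2 map (512·2·2 = 2048, matching the reported feature width):

```python
import torch

x = torch.randn(112, 512, 2, 2)   # hypothetical block_4 output for a 112-sample batch
print(torch.flatten(x, 1).shape)  # torch.Size([112, 2048])
```

So [112, 2048] instead of [128, 2048] only means the network received 112 inputs, not that the network changed.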

【Comments】:

【Answer 1】:

I solved it. The problem was the last batch. I used drop_last=True in the DataLoader and it worked.
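A minimal sketch of that fix, assuming the data comes from a standard torch.utils.data.DataLoader (the question does not show the loader code) and a hypothetical dataset of 1392 samples:

```python
import torch
from torch.utils.data import DataLoader

dataset = list(range(1392))  # hypothetical size: 1392 % 128 = 112 leftover samples

# Without drop_last, the final batch holds only the 112 leftover samples,
# which no longer matches the hard-coded torch.zeros(128) label tensors.
loader = DataLoader(dataset, batch_size=128)
print([len(b) for b in loader][-1])  # 112

# With drop_last=True, the incomplete final batch is discarded, so every
# batch the training loop sees has exactly 128 samples.
loader = DataLoader(dataset, batch_size=128, drop_last=True)
print({len(b) for b in loader})  # {128}
```

An alternative that keeps all the data would be to size the labels per batch, e.g. torch.zeros(input_a.size(0)) instead of torch.zeros(128).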

【Discussion】:
