Convert pytorch float Sigmoid results to labels

Posted: 2021-12-05

【Question】:

I am trying to build a segmentation model with PyTorch and implement a custom IoULoss:

def IoULoss(inputs, targets, smooth=1e-6):
    inputs = (inputs.view(inputs.size(0), -1) > 0.5)
    targets = targets.view(targets.size(0), -1)
    intersection = (inputs & targets).float().sum(1)
    union = (inputs | targets).float().sum(1)
    IoU = (intersection + smooth) / (union + smooth)
    return 1 - IoU.mean()

But when I train the model, I get this error:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

Is there a good way to convert my predictions to labels?

Full error traceback:

RuntimeError                              Traceback (most recent call last)
<ipython-input-53-3bfc1b43c8ba> in <module>()
----> 1 my_train(model, 30, torch.optim.Adam(model.parameters(), lr=0.01), IoULoss, train_loader)

2 frames
<ipython-input-41-ebe9c66b1806> in my_train(clf, epochs, optimizer, criterion, train_data, test_data)
     22             epoch_loss += loss.item()
     23 
---> 24             loss.backward()
     25             optimizer.step()
     26 

/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
    253                 create_graph=create_graph,
    254                 inputs=inputs)
--> 255         torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
    256 
    257     def register_hook(self, hook):

/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
    147     Variable._execution_engine.run_backward(
    148         tensors, grad_tensors_, retain_graph, create_graph, inputs,
--> 149         allow_unreachable=True, accumulate_grad=True)  # allow_unreachable flag
    150 
    151 

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

The training loop:

def my_train(clf, epochs, optimizer, criterion, train_data, test_data=None):

    cur_min_loss = 10e8
    train_losses = []

    for epoch_step in range(epochs):

        epoch_loss = 0.0

        for i, batch in enumerate(train_data):

            X, y = batch

            optimizer.zero_grad()
            prediction = clf(X)
            loss = criterion(prediction, y)
            epoch_loss += loss.item()

            loss.backward()
            optimizer.step()

            del prediction
            del X
            del y
            torch.cuda.empty_cache()

        train_losses.append(epoch_loss / (i + 1))

The criterion is IoULoss; the final activation of clf is a Sigmoid; the optimizer is Adam; train_data is a custom dataset inheriting from the PyTorch Dataset class.

【Comments】:

Please post the full error traceback.

@ayandas Added the traceback.

Can you show the code of your model inference?

@Ivan Added it.

【Answer 1】:

The first expression in your loss function:

inputs.view(inputs.size(0), -1) > 0.5

is not a differentiable operation, so gradients cannot propagate through it, which is exactly what the RuntimeError reports.
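One common way around this (a sketch, not the only option; `soft_iou_loss` is an illustrative name, not from the original post) is to compute a "soft" IoU directly on the sigmoid probabilities instead of thresholded labels, so the loss stays differentiable:

```python
import torch

# Reproducing the problem: a comparison returns a boolean tensor that is
# detached from the autograd graph, so any backward() downstream fails.
logits = torch.randn(2, 4, requires_grad=True)
probs = torch.sigmoid(logits)
hard = probs > 0.5
print(hard.requires_grad)  # False: gradients cannot flow through `>`

# A differentiable "soft" IoU: treat the probabilities themselves as a
# fuzzy intersection/union instead of hard 0/1 labels.
def soft_iou_loss(inputs, targets, smooth=1e-6):
    inputs = inputs.view(inputs.size(0), -1)             # probabilities in [0, 1]
    targets = targets.view(targets.size(0), -1).float()  # {0, 1} masks
    intersection = (inputs * targets).sum(1)
    union = inputs.sum(1) + targets.sum(1) - intersection
    iou = (intersection + smooth) / (union + smooth)
    return 1 - iou.mean()

targets = (torch.rand(2, 4) > 0.5).float()
loss = soft_iou_loss(probs, targets)
loss.backward()                  # succeeds: the graph is intact
print(logits.grad is not None)   # True
```

For reporting a hard IoU metric during validation, the original thresholding is fine, as long as it runs under `torch.no_grad()` where no gradient is needed.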

【Discussion】:
