RuntimeError:梯度计算所需的变量之一已被就地操作修改:PyTorch 错误
Posted: 2021-03-04 02:00:03

[Question] I'm trying to run some code in PyTorch, but at this point I'm stuck:
In the first iteration, the backward passes for both the discriminator and the generator run fine:
....
self.G_loss.backward(retain_graph=True)
self.D_loss.backward()
...
In the second iteration, when `self.G_loss.backward(retain_graph=True)` executes, I get this error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8192, 512]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
According to `torch.autograd.set_detect_anomaly`, the following last lines in the discriminator network are responsible for this:
bottleneck = bottleneck[:-1]
self.embedding = x.view(x.size(0), -1)
self.logit = self.layers[-1](self.embedding)
The strange thing is that I use this network architecture in other code that works fine. Any suggestions?
The full error:
site-packages\torch\autograd\__init__.py", line 127, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [8192, 512]] is at version 2; expected version 1 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
[Answer 1]: Solved by deleting the code line containing `loss += loss_val`.
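The in-place `+=` is the likely culprit. A minimal sketch (toy tensors, not the poster's actual code) of how an in-place edit to a tensor that a retained graph still needs triggers exactly this RuntimeError, followed by a safe way to keep a running total:

```python
import torch

# A tensor whose backward pass saves an intermediate result:
# exp() saves its output y for use during backpropagation.
w = torch.ones(3, requires_grad=True)
y = torch.exp(w)
loss = y.sum()

loss.backward(retain_graph=True)   # first backward: works

y += 1                             # in-place edit bumps y's version counter
try:
    loss.backward()                # second backward sees the stale saved tensor
    failed = False
except RuntimeError:
    # "one of the variables needed for gradient computation has been
    # modified by an inplace operation"
    failed = True
print("second backward failed:", failed)

# Safe pattern for running loss totals: accumulate a detached Python float
# instead of mutating a tensor that is part of the graph.
total = 0.0
total += loss.item()
```

This is why removing the `loss += loss_val` line fixes the error: the accumulation was mutating a tensor that the retained graph still referenced.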
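More generally, structuring the GAN updates so that no graph is carried across iterations avoids this whole family of errors. A sketch with hypothetical one-layer models (not the poster's architecture): update D on detached fakes, then run a fresh forward for the G update, with no `retain_graph=True` at all:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in generator and discriminator (hypothetical, for illustration only).
G = nn.Linear(4, 8)
D = nn.Linear(8, 1)
opt_G = torch.optim.SGD(G.parameters(), lr=0.1)
opt_D = torch.optim.SGD(D.parameters(), lr=0.1)
bce = nn.BCEWithLogitsLoss()

for step in range(2):
    z = torch.randn(16, 4)
    real = torch.randn(16, 8)

    # Discriminator step: detach the fake so D's backward never touches
    # G's graph; no retain_graph is needed.
    fake = G(z)
    d_loss = (bce(D(real), torch.ones(16, 1))
              + bce(D(fake.detach()), torch.zeros(16, 1)))
    opt_D.zero_grad()
    d_loss.backward()
    opt_D.step()

    # Generator step: a fresh forward through the (already updated) D
    # builds a new graph each iteration, so nothing stale is reused.
    g_loss = bce(D(fake), torch.ones(16, 1))
    opt_G.zero_grad()
    g_loss.backward()
    opt_G.step()
```

Because each iteration builds its graph from scratch, no in-place parameter update from `opt_D.step()` can invalidate a saved tensor from a previous backward.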