PyTorch - autograd Automatic Differentiation
Posted by SpikeKing
Paper: Automatic Differentiation in Machine Learning: a Survey
Reference: PyTorch tutorial, AUTOMATIC DIFFERENTIATION WITH TORCH.AUTOGRAD
When the loss is a scalar, backward() can be called with no arguments; for a tensor output, autograd computes a vector-Jacobian product v^T * J, where v is the tensor passed to backward().
primitive operation: every primitive operation records a gradient function (grad_fn) that is used during the backward pass.
The torch.autograd package computes the gradients.
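As a minimal sketch (the tensor name t is illustrative, not from the original post), the functional API torch.autograd.grad computes a gradient without calling backward():

import torch

t = torch.tensor(2.0, requires_grad=True)
(grad_t,) = torch.autograd.grad(t ** 2, t)  # d(t^2)/dt = 2*t
print(grad_t)  # tensor(4.)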
import torch

x = torch.ones(5)  # input tensor
print(f"x: {x}")
y = torch.zeros(3)  # expected output (labels)
print(f"y: {y}")
w = torch.randn(5, 3, requires_grad=True)  # enable gradient tracking for w
print(f"w: {w}")
b = torch.randn(3, requires_grad=True)  # enable gradient tracking for b
print(f"b: {b}")
z = torch.matmul(x, w) + b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)
Gradients are computed with the back-propagation algorithm.
backward() is a method of the Tensor class. When loss is a scalar it can be called as loss.backward() with no arguments; when loss is a tensor, backward() must be passed a gradient tensor of the same shape.
print(f"Gradient function for z = z.grad_fn")
print(f"Gradient function for loss = loss.grad_fn")
loss.backward()
print(w.grad)
print(b.grad)
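A related detail worth noting: .grad is only populated for leaf tensors created with requires_grad=True; as a small sketch continuing the example above:

print(x.grad)  # None: x was created without requires_grad
print(z.grad)  # None: z is an intermediate (non-leaf) tensor, so its grad is not retained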
retain_graph=True keeps the computation graph after backward(); without it, a second backward() call raises: RuntimeError: Trying to backward through the graph a second time.
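As a minimal sketch (rebuilding z and loss from the tensors above), passing retain_graph=True on the first call keeps the graph alive for a second call; note that the gradients accumulate in w.grad and b.grad:

z = torch.matmul(x, w) + b
loss = torch.nn.functional.binary_cross_entropy_with_logits(z, y)
loss.backward(retain_graph=True)  # graph is kept for another backward pass
loss.backward()                   # second call succeeds; gradients accumulate
print(w.grad)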
torch.no_grad() disables gradient tracking:
z = torch.matmul(x, w)+b
print(z.requires_grad)
with torch.no_grad():
    z = torch.matmul(x, w) + b
print(z.requires_grad)
After z = z.detach(), z.requires_grad is False:
z = z.detach()
print(z.requires_grad)  # False
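One common reason to disable tracking, mentioned in the PyTorch tutorial, is to freeze parameters; a sketch using the in-place requires_grad_() switch (the name w_frozen is illustrative):

w_frozen = torch.randn(5, 3, requires_grad=True)
w_frozen.requires_grad_(False)  # turn off gradient tracking in place
print(w_frozen.requires_grad)   # False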
DAG: autograd records operations on tensors in a directed acyclic graph.
When the output is a tensor rather than a scalar, pass a gradient tensor such as torch.ones_like(inp) to backward().
retain_graph=True keeps the graph so backward() can be called on it repeatedly.
Zero the accumulated gradients with inp.grad.zero_().
inp = torch.eye(5, requires_grad=True)
out = (inp+1).pow(2)
print(out)
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"First call\\ninp.grad")
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"\\nSecond call\\ninp.grad")
inp.grad.zero_()
out.backward(torch.ones_like(inp), retain_graph=True)
print(f"\\nCall after zeroing gradients\\ninp.grad")