pytorch torch.nn
Posted by leimu
torch.nn covers model definition, the definition of network layers, and the definition of loss functions.
import torch
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
N, D_in, H, D_out = 64, 1000, 100, 10
# Create random Tensors to hold inputs and outputs
x = torch.randn(N, D_in)
y = torch.randn(N, D_out)
# Use the nn package to define our model as a sequence of layers. nn.Sequential
# is a Module which contains other Modules, and applies them in sequence to
# produce its output. Each Linear Module computes output from input using a
# linear function, and holds internal Tensors for its weight and bias.
model = torch.nn.Sequential(
    torch.nn.Linear(D_in, H),
    torch.nn.ReLU(),
    torch.nn.Linear(H, D_out),
)
# The nn package also contains definitions of popular loss functions; in this
# case we will use Mean Squared Error (MSE) as our loss function.
loss_fn = torch.nn.MSELoss(reduction='sum')
learning_rate = 1e-4
for t in range(500):
    # Forward pass: compute predicted y by passing x to the model. Module objects
    # override the __call__ operator so you can call them like functions. When
    # doing so you pass a Tensor of input data to the Module and it produces
    # a Tensor of output data.
    y_pred = model(x)

    # Compute and print loss. We pass Tensors containing the predicted and true
    # values of y, and the loss function returns a Tensor containing the loss.
    loss = loss_fn(y_pred, y)
    print(t, loss.item())

    # Zero the gradients before running the backward pass.
    model.zero_grad()

    # Backward pass: compute gradient of the loss with respect to all the learnable
    # parameters of the model. Internally, the parameters of each Module are stored
    # in Tensors with requires_grad=True, so this call will compute gradients for
    # all learnable parameters in the model.
    loss.backward()

    # Update the weights using gradient descent. Each parameter is a Tensor, so
    # we can access its gradients like we did before.
    with torch.no_grad():
        for param in model.parameters():
            param -= learning_rate * param.grad
Above, we update the parameters by hand with param -= learning_rate * param.grad. Instead, torch.optim can perform the update automatically: the optim package provides a variety of optimization algorithms, including SGD with momentum, RMSProp, Adam, and more.
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for t in range(500):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
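Adam is only one choice; the other optimizers mentioned above are constructed the same way, from model.parameters() plus their own hyperparameters. A minimal sketch (the learning rates and momentum below are illustrative, not tuned values):

# SGD with momentum; lr and momentum here are only example values.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
# RMSProp; again, lr is illustrative.
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-3)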
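nn.Sequential is convenient for simple feed-forward stacks. When the forward pass needs custom control flow, the usual alternative is to subclass torch.nn.Module. A minimal sketch equivalent to the model above (the class name and structure are illustrative, not part of the original post):

import torch

class TwoLayerNet(torch.nn.Module):
    def __init__(self, D_in, H, D_out):
        super().__init__()
        # Submodules assigned as attributes are registered automatically,
        # so their weights appear in model.parameters().
        self.linear1 = torch.nn.Linear(D_in, H)
        self.linear2 = torch.nn.Linear(H, D_out)

    def forward(self, x):
        # Same computation as the Sequential model: Linear -> ReLU -> Linear.
        h = torch.nn.functional.relu(self.linear1(x))
        return self.linear2(h)

model = TwoLayerNet(D_in, H, D_out)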