Custom weight initialisation causing error - pytorch

【Posted】: 2019-07-21 12:29:41 【Question】:
%reset -f

import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms
import numpy as np
import matplotlib.pyplot as plt
import torch.utils.data as data_utils
import torch.nn.functional as F

x1 = np.array([0,0])
x2 = np.array([0,1])
x3 = np.array([1,0])
x4 = np.array([1,1])

num_epochs = 200

x = torch.tensor([x1,x2,x3,x4]).float()
y = torch.tensor([0,1,1,0]).long()

train = data_utils.TensorDataset(x,y)
train_loader = data_utils.DataLoader(train , batch_size=2 , shuffle=True)

device = 'cpu'

input_size = 2
hidden_size = 100 
num_classes = 2

learning_rate = .0001

torch.manual_seed(24)

def weights_init(m):
    m.weight.data.normal_(0.0, 1)

class NeuralNet(nn.Module) : 
    def __init__(self, input_size, hidden_size, num_classes) : 
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size , hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size , num_classes)

    def forward(self, x) : 
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out

model = NeuralNet(input_size, hidden_size, num_classes).to(device)
model.apply(weights_init)

criterionCE = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)

for i in range(0 , 1) :

        total_step = len(train_loader)
        for epoch in range(num_epochs) : 
            for i,(images , labels) in enumerate(train_loader) : 
                images = images.to(device)
                labels = labels.to(device)

                outputs = model(images)
                loss = criterionCE(outputs , labels)

                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

        outputs = model(x)

        print(outputs.data.max(1)[1])

I am using this to initialize the weights:

def weights_init(m):
    m.weight.data.normal_(0.0, 1)

But it throws the following error:

~/anaconda3/envs/pytorch/lib/python3.7/site-packages/torch/nn/modules/module.py in __getattr__(self, name)
    533                 return modules[name]
    534         raise AttributeError("'{}' object has no attribute '{}'".format(
--> 535             type(self).__name__, name))
    536 
    537     def __setattr__(self, name, value):

AttributeError: 'ReLU' object has no attribute 'weight'

Is this the correct way to initialize the weights?

Also, shouldn't the type of the object be nn.Module rather than ReLU?

【Comments】:

【Answer 1】:

You are trying to set the weights of a layer (ReLU) that has no weights.

In weights_init, you should check the type of the layer before initializing its weights. For example:

def weights_init(m):
    if type(m) == nn.Linear:
        m.weight.data.normal_(0.0, 1)

See How to initialize weights in PyTorch?
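
For reference, model.apply(fn) calls fn on every submodule recursively (children first, then the module itself), which is why the original weights_init also reaches the ReLU layer. Below is a minimal sketch, assuming the NeuralNet class from the question, that just prints which modules apply visits (the lambda is purely for illustration and not part of the fix):

model = NeuralNet(input_size, hidden_size, num_classes)
model.apply(lambda m: print(type(m).__name__))
# prints: Linear, ReLU, Linear, NeuralNet
# ReLU (and the top-level NeuralNet) have no .weight attribute, which is what
# raised the AttributeError; the isinstance check above simply skips them.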

【Comments】:

【Answer 2】:

In addition to what Fabio mentioned about checking the layer type (ReLU is an activation layer, not a trainable one), and since this is about initialization, you can also do the weight initialization inside the __init__ method itself, as is done here:

https://github.com/pytorch/vision/blob/master/torchvision/models/vgg.py

def __init__(self, features, num_classes=1000, ...):
    ----snip---
    self._initialize_weights()

def _initialize_weights(self):
    for m in self.modules():
        if isinstance(m, nn.Linear):
            m.weight.data.normal_(0.0, 1)
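
As a hedged sketch of how that pattern could look when applied to the NeuralNet from the question (the normal_(0.0, 1) values simply mirror the question's initializer; this is not the exact vgg.py code):

class NeuralNet(nn.Module):
    def __init__(self, input_size, hidden_size, num_classes):
        super(NeuralNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, num_classes)
        self._initialize_weights()  # run once, right after the layers are created

    def _initialize_weights(self):
        # self.modules() yields this module and every submodule
        for m in self.modules():
            if isinstance(m, nn.Linear):
                m.weight.data.normal_(0.0, 1)

    def forward(self, x):
        out = self.fc1(x)
        out = self.relu(out)
        out = self.fc2(out)
        return out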

【Comments】:
