Reproducible PyTorch results and random seeds
I have a simple toy NN in Pytorch. I am setting all the seeds I can find in the docs, as well as numpy random.
If I run the code below from top to bottom, the results appear to be reproducible.
However, if I run block 1 only once and then run block 2 repeatedly, the results change (sometimes dramatically). I am not sure why this happens, since the network is re-initialized and the optimizer reset each time.
I am using version 0.4.0.
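For reference, the calls commonly used to pin down randomness in PyTorch 0.4-era code are sketched below. This is a general-purpose aside rather than part of the question's code; the cuDNN flags only matter when running cuDNN-backed workloads on a GPU.

import random
import numpy as np
import torch

torch.manual_seed(123)                     # PyTorch CPU generator
torch.cuda.manual_seed_all(123)            # all PyTorch GPU generators (if CUDA is used)
np.random.seed(123)                        # numpy's global generator
random.seed(123)                           # Python's built-in generator
torch.backends.cudnn.deterministic = True  # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False     # disable non-deterministic autotuning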
BLOCK #1
from __future__ import division
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import torch
import torch.utils.data as utils_data
from torch.autograd import Variable
from torch import optim, nn
from torch.utils.data import Dataset
import torch.nn.functional as F
from torch.nn.init import xavier_uniform_, xavier_normal_,uniform_
torch.manual_seed(123)
import random
random.seed(123)
from sklearn.datasets import load_boston
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
%matplotlib inline
cuda=True #set to true uses GPU
if cuda:
    torch.cuda.manual_seed(123)
#load boston data from scikit
boston = load_boston()
x=boston.data
y=boston.target
y=y.reshape(y.shape[0],1)
#train and test
x_train, x_test, y_train, y_test = train_test_split(x,y,test_size=0.3, random_state=123, shuffle=False)
#change to tensors
x_train = torch.from_numpy(x_train)
y_train = torch.from_numpy(y_train)
#create dataset and use data loader
training_samples = utils_data.TensorDataset(x_train, y_train)
data_loader_trn = utils_data.DataLoader(training_samples, batch_size=64,drop_last=False)
#change to tensors
x_test = torch.from_numpy(x_test)
y_test = torch.from_numpy(y_test)
#create dataset and use data loader
testing_samples = utils_data.TensorDataset(x_test, y_test)
data_loader_test = utils_data.DataLoader(testing_samples, batch_size=64,drop_last=False)
#simple model
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # all the layers
        self.fc1 = nn.Linear(x.shape[1], 20)
        xavier_uniform_(self.fc1.weight.data)  # this is how you can change the weight init
        self.drop = nn.Dropout(p=0.5)
        self.fc2 = nn.Linear(20, 1)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.drop(x)
        x = self.fc2(x)
        return x
BLOCK #2
net=Net()
if cuda:
    net.cuda()
# create a stochastic gradient descent optimizer
optimizer = optim.Adam(net.parameters())
# create a loss function (mse)
loss = nn.MSELoss(size_average=False)
# run the main training loop
epochs =20
hold_loss=[]
for epoch in range(epochs):
    cum_loss = 0.
    cum_records_epoch = 0
    for batch_idx, (data, target) in enumerate(data_loader_trn):
        tr_x, tr_y = data.float(), target.float()
        if cuda:
            tr_x, tr_y = tr_x.cuda(), tr_y.cuda()
        # Reset gradient
        optimizer.zero_grad()
        # Forward pass
        fx = net(tr_x)
        output = loss(fx, tr_y)  # loss for this batch
        cum_loss += output.item()  # accumulate the loss
        # Backward
        output.backward()
        # Update parameters based on backprop
        optimizer.step()
        cum_records_epoch += len(tr_x)
        if batch_idx % 1 == 0:
            print('Train Epoch: {} [{}/{} ({:.0f}%)] Loss: {:.6f}'.format(
                epoch, cum_records_epoch, len(data_loader_trn.dataset),
                100. * (batch_idx + 1) / len(data_loader_trn), output.item()))
    print('Epoch average loss: {:.6f}'.format(cum_loss / cum_records_epoch))
    hold_loss.append(cum_loss / cum_records_epoch)
#training loss
plt.plot(np.array(hold_loss))
plt.show()
Possible Reason
Without knowing what the "sometimes dramatic" differences are, it is hard to answer for sure; but it makes sense that [block_1 x1; block_2 x1] xN (read "run block_1 then block_2 once, and repeat both operations N times") and [block_1 x1; block_2 xN] x1 give different results, given how pseudo-random number generators (PRNGs) and seeds work.
In the first case, the PRNGs are re-initialized in block_1 after every block_2, so each of the N instances of block_2 accesses the same sequence of pseudo-random numbers, seeded by the block_1 that ran just before it.
In the second case, the PRNGs are initialized only once, by the single block_1 run, so each instance of block_2 draws different random values.
(For more about PRNGs and seeds, see: random.seed(): What does it do?)
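In practical terms: if you want every run of BLOCK #2 to produce identical results without re-running BLOCK #1, reset the seeds at the top of that block so each run replays the same pseudo-random sequence. A minimal sketch of that idea is below; the exact set of calls needed depends on which libraries your code actually draws random numbers from.

# Sketch: re-seed before re-creating the network and optimizer
# (seed value matches the 123 used in BLOCK #1).
torch.manual_seed(123)       # CPU generator: weight init, dropout masks, ...
torch.cuda.manual_seed(123)  # GPU generator, only relevant if cuda is used
random.seed(123)             # Python's built-in generator
np.random.seed(123)          # numpy's global generator, if numpy randomness is used

net = Net()                                # re-created with the same initial weights
optimizer = optim.Adam(net.parameters())
# ... rest of BLOCK #2 unchanged ...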
Simplified Example
Let's assume numpy / CUDA / pytorch actually used a very poor PRNG that only returns incremented values (i.e. PRNG(x_n) = PRNG(x_(n-1)) + 1, with x_0 = seed). If you seed this generator with 0, it will therefore return 1 on the first random() call, 2 on the second call, and so on.
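As an aside, a minimal runnable stand-in for such a counting generator could look like the following. The CountingPRNG class and the prng instance are purely illustrative (they are not a real numpy or torch API); they only exist so that the block_1 / block_2 snippets below can actually be executed.

class CountingPRNG:
    """Toy 'PRNG' that simply counts up from the last seed value."""
    def __init__(self):
        self.state = 0

    def seed(self, value):
        self.state = value

    def random(self):
        self.state += 1
        return self.state

prng = CountingPRNG()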
Now, for the sake of the example, let's also simplify the blocks:
def block_1():
    seed = 0
    print("seed: {}".format(seed))
    prng.seed(seed)

def block_2():
    res = "random results:"
    for i in range(4):
        res += " {}".format(prng.random())
    print(res)
Let's compare [block_1 x1; block_2 x1] xN and [block_1 x1; block_2 xN] x1, with N=3:
for i in range(3):
    block_1()
    block_2()
# > seed: 0
# > random results: 1 2 3 4
# > seed: 0
# > random results: 1 2 3 4
# > seed: 0
# > random results: 1 2 3 4
block_1()
for i in range(3):
    block_2()
# > seed: 0
# > random results: 1 2 3 4
# > random results: 5 6 7 8
# > random results: 9 10 11 12
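The same pattern can be checked with PyTorch's real generator. The snippet below is a minimal illustration, independent of the Boston-housing code above: re-seeding before every draw reproduces the same tensor, while seeding once and drawing repeatedly advances the generator and gives a different tensor each time.

import torch

# Re-seed before each draw: identical output every time (like block_1 + block_2).
for _ in range(3):
    torch.manual_seed(123)
    print(torch.rand(3))

# Seed once, then keep drawing: the generator state advances, so every draw
# differs (like running block_2 several times after a single block_1).
torch.manual_seed(123)
for _ in range(3):
    print(torch.rand(3))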