python [training PyTorch on a P100] LSTM template

Preface: this article was compiled by the editors of cha138.com and mainly introduces an LSTM template for training PyTorch models on a P100 GPU; hopefully it is of some reference value to you.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader


class lstm(nn.Module):

    def __init__(self,
                 input_dim=None,
                 hidden_dim=None,
                 output_dim=None):
        super(lstm, self).__init__()
        self.input_dim = input_dim
        self.hidden_dim = hidden_dim
        self.output_dim = output_dim
        # single-layer LSTM; batch_first=True means the input is (batch, seq_len, input_dim)
        self.lstmcell = nn.LSTM(input_size=input_dim,
                                hidden_size=hidden_dim,
                                batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_dim)

    def forward(self, x):
        # zero initial hidden and cell states, shape (num_layers, batch, hidden_dim)
        h0 = torch.zeros(1, x.size(0), self.hidden_dim, device=x.device)
        c0 = torch.zeros(1, x.size(0), self.hidden_dim, device=x.device)
        output, _ = self.lstmcell(x, (h0, c0))
        # classify from the hidden state at the last time step
        y = self.fc(output[:, -1, :])
        return F.log_softmax(y, dim=1)
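As a quick shape check, the model can be run on a random batch: the input is expected as (batch, sequence length, input_dim) because of batch_first=True, and the output is one row of log-probabilities per sample. The batch size of 4 below is purely illustrative, and a CUDA device (such as a P100) is assumed.

# illustrative shape check; assumes a CUDA device is available
dummy = torch.randn(4, 28, 28).cuda()               # (batch=4, seq_len=28, input_dim=28)
net = lstm(input_dim=28, hidden_dim=128, output_dim=28).cuda()
print(net(dummy).shape)                              # torch.Size([4, 28]): log-probabilities per class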


train_batch_size = 128
valid_batch_size = 1000
test_batch_size = 1000
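The DataLoader below is built from a train_dataset that the template never defines. Given input_dim=28 and 28 time steps per sample, one plausible (hypothetical) choice is MNIST, treating each 28x28 image as a sequence of 28 rows of 28 pixels; with MNIST's 10 classes, output_dim=10 would be the natural setting, although the template's output_dim=28 also runs.

# hypothetical dataset setup: MNIST images read as 28-step sequences of 28 features
from torchvision import datasets, transforms

to_sequence = transforms.Compose([
    transforms.ToTensor(),                      # (1, 28, 28), values in [0, 1]
    transforms.Lambda(lambda t: t.squeeze(0)),  # (28, 28): 28 time steps x 28 features
])
train_dataset = datasets.MNIST(root='./data', train=True, download=True,
                               transform=to_sequence)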


train_dataloader = DataLoader(dataset=train_dataset,
                              batch_size=train_batch_size,
                              shuffle=True,
                              num_workers=1,
                              drop_last=True)


model = lstm(input_dim=28,
             hidden_dim=128,
             output_dim=28)
# wrap for multi-GPU data parallelism and move the model to the GPU
model = nn.DataParallel(model).cuda()

# the training loop below needs an optimizer; Adam is one common choice
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)


for epoch_index in range(5):
    model.train()
    error_sum = 0.
    for batch_index, (inputs, labels) in enumerate(train_dataloader):
        inputs, labels = inputs.cuda(), labels.cuda()
        optimizer.zero_grad()
        outputs = model(inputs)
        error = F.nll_loss(outputs, labels)
        error_sum += error.item()
        error.backward()
        optimizer.step()
    print('epoch %d, mean training loss %.4f' % (epoch_index, error_sum / len(train_dataloader)))
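The valid_batch_size and test_batch_size defined earlier are never used in the template; a minimal evaluation pass, assuming a test_dataloader built the same way as train_dataloader, might look like this:

# hypothetical evaluation loop; test_dataloader is assumed to exist
model.eval()
correct, total = 0, 0
with torch.no_grad():
    for inputs, labels in test_dataloader:
        inputs, labels = inputs.cuda(), labels.cuda()
        predictions = model(inputs).argmax(dim=1)
        correct += (predictions == labels).sum().item()
        total += labels.size(0)
print('test accuracy: %.4f' % (correct / total))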
