PyTorch: Shuffle DataLoader
Posted 2021-03-31 19:24:58

Question:

A couple of scenarios around shuffling the DataLoader confuse me, as described below.
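For context, the loaders are presumably created along these lines; the dataset objects and tensor names here are placeholders, not from the actual code:

from torch.utils.data import DataLoader, TensorDataset

# hypothetical dataset construction; only the shuffle flag changes between scenarios
train_dataset = TensorDataset(train_X_tensor, y_train_tensor)
valid_dataset = TensorDataset(x_cv_tensor, y_test_tensor)
train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=False)
valid_loader = DataLoader(valid_dataset, batch_size=batch_size, shuffle=False)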
First, I set the shuffle parameter to False on both train_loader and valid_loader, and I get the following results:
Epoch 1/4 loss=0.8802 val_loss=0.8202 train_acc=0.63 val_acc=0.63
Epoch 2/4 loss=0.6993 val_loss=0.6500 train_acc=0.66 val_acc=0.72
Epoch 3/4 loss=0.5363 val_loss=0.5385 train_acc=0.76 val_acc=0.80
Epoch 4/4 loss=0.4055 val_loss=0.5130 train_acc=0.85 val_acc=0.81
Next, I set shuffle to True on train_loader and keep it False on valid_loader, and I get the following results:
Epoch 1/4 loss=0.8928 val_loss=0.8284 train_acc=0.63 val_acc=0.63
Epoch 2/4 loss=0.7308 val_loss=0.6263 train_acc=0.61 val_acc=0.73
Epoch 3/4 loss=0.5594 val_loss=0.5046 train_acc=0.54 val_acc=0.81
Epoch 4/4 loss=0.4304 val_loss=0.4525 train_acc=0.49 val_acc=0.82
Based on these results, my training accuracy is worse when I shuffle train_loader.
Here is a snippet of my code:
for epoch in range(n_epochs):
    model.train()
    avg_loss = 0.
    train_preds = np.zeros((len(train_X), len(le.classes_)))
    for i, (x_batch, y_batch) in enumerate(train_loader):
        y_pred = model(x_batch)
        loss = loss_fn(y_pred, y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        avg_loss += loss.item() / len(train_loader)
        train_preds[i * batch_size:(i + 1) * batch_size] = F.softmax(y_pred).cpu().detach().numpy()
    train_accuracy = sum(train_preds.argmax(axis=1) == y_train) / len(y_train)

    model.eval()
    avg_val_loss = 0.
    val_preds = np.zeros((len(x_cv), len(le.classes_)))
    for i, (x_batch, y_batch) in enumerate(valid_loader):
        y_pred = model(x_batch).detach()
        avg_val_loss += loss_fn(y_pred, y_batch).item() / len(valid_loader)
        val_preds[i * batch_size:(i + 1) * batch_size] = F.softmax(y_pred).cpu().numpy()
    val_accuracy = sum(val_preds.argmax(axis=1) == y_test) / len(y_test)
Am I making a mistake in how I compute the training accuracy? Thanks in advance.
Answer 1:

You are comparing shuffled predictions against unshuffled labels: train_preds is filled in the order the shuffled batches arrive, while y_train is still in the original dataset order. To fix this, count the correct predictions in each iteration and compute the overall accuracy at the end:
for epoch in range(n_epochs):
    model.train()
    avg_loss = 0.
    total_correct = 0
    total_samples = 0
    for i, (x_batch, y_batch) in enumerate(train_loader):
        y_pred = model(x_batch)
        loss = loss_fn(y_pred, y_batch)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        avg_loss += loss.item() / len(train_loader)
        # compare each batch's predictions to that same batch's labels,
        # so the shuffling no longer matters; .item() keeps the count a plain int
        total_correct += (torch.argmax(y_pred, 1) == y_batch).sum().item()
        total_samples += y_batch.shape[0]
    train_accuracy = total_correct / total_samples
(I have not tested this code.)
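If you still need the full prediction array (for example, to inspect the probabilities later), an equivalent fix is to accumulate the shuffled labels alongside the predictions and compare against those instead of the original y_train. A sketch in the same spirit, also untested, with variable names of my own:

epoch_preds, epoch_labels = [], []
for x_batch, y_batch in train_loader:
    y_pred = model(x_batch)
    # ... loss, backward, and optimizer step as above ...
    epoch_preds.append(F.softmax(y_pred, dim=1).detach().cpu().numpy())
    epoch_labels.append(y_batch.cpu().numpy())
# both arrays are in the same (shuffled) order, so the comparison lines up
train_preds = np.concatenate(epoch_preds)
train_labels = np.concatenate(epoch_labels)
train_accuracy = (train_preds.argmax(axis=1) == train_labels).mean()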
Comments:
Thank you very much. I tested the code and got great results :)