Pytorch RuntimeError: expected scalar type Float but found Byte

Posted: 2021-02-14 12:35:21

[Question]:

I am working on the classic digits example. I want to build my first neural network to predict the labels of digit images 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. The first column of train.txt holds the label, and all the remaining columns are the features of each sample. I have defined a class to import my data:

import numpy as np
import pandas as pd
import torch
from torch import nn
from torch.utils.data import Dataset, DataLoader

class DigitDataset(Dataset):
    """Digit dataset."""

    def __init__(self, file_path, transform=None):
        """
        Args:
            csv_file (string): Path to the csv file with annotations.
            root_dir (string): Directory with all the images.
            transform (callable, optional): Optional transform to be applied
                on a sample.
        """
        self.data = pd.read_csv(file_path, header = None, sep =" ")
        self.transform = transform

    def __len__(self):
        return len(self.data)

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()

        labels = self.data.iloc[idx,0]
        images = self.data.iloc[idx,1:-1].values.astype(np.uint8).reshape((1,16,16))

        if self.transform is not None:
            images = self.transform(images)
        return images, labels

Then I run these commands to split my dataset into batches and to define the model and the loss:

train_dataset = DigitDataset("train.txt")
train_loader = DataLoader(train_dataset, batch_size=64,
                        shuffle=True, num_workers=4)

# Model creation with neural net Sequential model
model = nn.Sequential(nn.Linear(256, 128),  # layer 1: 256 inputs, 128 outputs
                      nn.ReLU(),            # rectified linear unit activation
                      nn.Linear(128, 64),   # layer 2: 128 inputs, 64 outputs
                      nn.Tanh(),            # tanh activation
                      nn.Linear(64, 10),    # layer 3: 64 inputs, 10 outputs (digits 0-9)
                      nn.LogSoftmax(dim=1)  # log softmax to get log-probabilities for the output units
                      )

# defining the negative log-likelihood loss for calculating loss
criterion = nn.NLLLoss()

images, labels = next(iter(train_loader))
images = images.view(images.shape[0], -1)

logps = model(images) #log probabilities
loss = criterion(logps, labels) #calculate the NLL-loss

And I get this error:

---------------------------------------------------------------------------
   RuntimeError                              Traceback (most recent call last) 
    <ipython-input-2-7f4160c1f086> in <module>
     47 images = images.view(images.shape[0], -1)
     48 
---> 49 logps = model(images) #log probabilities
     50 loss = criterion(logps, labels) #calculate the NLL-loss

~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, 
*input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input)
    115     def forward(self, input):
    116         for module in self:
--> 117             input = module(input)
    118         return input
    119 

~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, 
*input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

 ~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
     91 
     92     def forward(self, input: Tensor) -> Tensor:
---> 93         return F.linear(input, self.weight, self.bias)
     94 
     95     def extra_repr(self) -> str:

 ~/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias)
   1688     if input.dim() == 2 and bias is not None:
   1689         # fused op is marginally faster
-> 1690         ret = torch.addmm(bias, input, weight.t())
   1691     else:
   1692         output = input.matmul(weight.t())

RuntimeError: expected scalar type Float but found Byte

Do you have any idea what is wrong? Thank you for your patience and help!

[Question comments]:

[Answer 1]:

This line is the cause of your error:

images = self.data.iloc[idx, 1:-1].values.astype(np.uint8).reshape((1, 16, 16))

images is of type uint8 (byte), while your neural network needs floating-point inputs in order to compute gradients (you cannot compute the gradients used in backpropagation with integers, as they are neither continuous nor differentiable).
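
For illustration, here is a minimal sketch that reproduces the mismatch with a linear layer and a made-up uint8 batch:

import torch
import torch.nn as nn

layer = nn.Linear(256, 128)                                    # float32 weights by default
x_bytes = torch.randint(0, 256, (4, 256), dtype=torch.uint8)   # made-up batch of 4 uint8 "images"

# layer(x_bytes)                      # raises: RuntimeError: expected scalar type Float but found Byte
out = layer(x_bytes.float() / 255.0)  # cast to float (and scale to [0, 1]) before the forward pass
print(out.shape)                      # torch.Size([4, 128])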

You can use torchvision.transforms.functional.to_tensor to convert the image to a float in [0, 1], like this:

import torchvision

images = torchvision.transforms.functional.to_tensor(
    self.data.iloc[idx, 1:-1].values.astype(np.uint8).reshape((1, 16, 16))
)

Or simply divide the values by 255 to put them into [0, 1].
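
For example, a minimal sketch of what __getitem__ could look like with the divide-by-255 approach (keeping the same column slicing as in the question, and returning the label as a long tensor, which NLLLoss expects for its targets):

    def __getitem__(self, idx):
        if torch.is_tensor(idx):
            idx = idx.tolist()

        # cast pixel values to float32 and scale them into [0, 1]
        images = self.data.iloc[idx, 1:-1].values.astype(np.float32).reshape((1, 16, 16)) / 255.0
        images = torch.from_numpy(images)

        # NLLLoss expects class indices (targets) as a long tensor
        labels = torch.tensor(int(self.data.iloc[idx, 0]), dtype=torch.long)

        return images, labels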

[Comments]:

It worked, but now I have a problem with my loss function: I cannot apply torchvision.transforms.functional.to_tensor to the labels, because they are not images. Now I get this error: RuntimeError: expected scalar type Long but found Double

Cast it to float via your_tensor.float()

I wrote labels = torch.from_numpy(np.array(self.data.iloc[idx,0])).float(), but I keep getting the error RuntimeError: expected scalar type Long but found Float. I also tried labels.float(), but it did not work.

OK, I fixed it with labels = torch.from_numpy(np.array(self.data.iloc[idx,0])).long(). Thank you very much for your help!

[Answer 2]:
import torch
import torchvision
import matplotlib.pyplot as plt
from time import time
from torchvision import datasets, transforms
from torch import nn, optim

transform = transforms.Compose([transforms.ToTensor(),
                              transforms.Normalize((0.5,), (0.5,)),
                              ])

trainset = datasets.MNIST('data/train', download=True, train=True, transform=transform)
valset = datasets.MNIST('data/test', download=True, train=False, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
valloader = torch.utils.data.DataLoader(valset, batch_size=64, shuffle=True)



input_size = 784
hidden_sizes = [128,128,64]
output_size = 10

model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], hidden_sizes[2]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[2], output_size),
                      nn.LogSoftmax(dim=1))
# print(model)

criterion = nn.NLLLoss()
images, labels = next(iter(trainloader))
images = images.view(images.shape[0], -1)

logps = model(images) #log probabilities
loss = criterion(logps, labels) #calculate the NLL loss




optimizer = optim.SGD(model.parameters(), lr=0.003, momentum=0.9)
time0 = time()
epochs = 15
for e in range(epochs):
    running_loss = 0
    for images, labels in trainloader:
        # Flatten MNIST images into a 784 long vector
        images = images.view(images.shape[0], -1)
    
        # Training pass
        optimizer.zero_grad()
        
        output = model(images)
        loss = criterion(output, labels)
        
        #This is where the model learns by backpropagating
        loss.backward()
        
        #And optimizes its weights here
        optimizer.step()
        
        running_loss += loss.item()
    else:
        print("Epoch  - Training loss: ".format(e, running_loss/len(trainloader)))
    print("\nTraining Time (in minutes) =",(time()-time0)/60)
    
    
    
    
    
images, labels = next(iter(valloader))
img = images[0].view(1, 784)
with torch.no_grad():
    logps = model(img)

ps = torch.exp(logps)
probab = list(ps.numpy()[0])
print("Predicted Digit =", probab.index(max(probab)))
# view_classify(img.view(1, 28, 28), ps)





correct_count, all_count = 0, 0
for images,labels in valloader:
  for i in range(len(labels)):
    img = images[i].view(1, 784)
    with torch.no_grad():
        logps = model(img)

    
    ps = torch.exp(logps)
    probab = list(ps.numpy()[0])
    pred_label = probab.index(max(probab))
    true_label = labels.numpy()[i]
    if(true_label == pred_label):
      correct_count += 1
    all_count += 1

print("Number Of Images Tested =", all_count)
print("\nModel Accuracy =", (correct_count/all_count))


torch.save(model, './my_mnist_model.pt')
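
As a usage note, a minimal sketch of how the saved model could be loaded back later (assuming the same file path):

# reload the full model object that was saved with torch.save(model, ...)
model = torch.load('./my_mnist_model.pt')   # on recent PyTorch versions you may need weights_only=False
model.eval()                                # switch to evaluation mode before making predictions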

[Comments]:
