[CNN] Training a CNN on the MNIST dataset (demo)
Posted by jinwan
Tips:
Open a conda prompt in the notebook's directory and run:

jupyter nbconvert --to markdown test.ipynb

to convert the .ipynb file to Markdown. HTML and PDF conversion work the same way:

jupyter nbconvert --to html test.ipynb
jupyter nbconvert --to pdf test.ipynb
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data as Data
from torchvision import datasets,transforms
import matplotlib.pyplot as plt
import numpy as np
input_size = 28   # image size: 28*28
num_class = 10    # number of classes
num_epochs = 3    # number of training epochs
batch_size = 64   # images per batch
train_dataset = datasets.MNIST(
    root='data',
    train=True,
    transform=transforms.ToTensor(),
    download=True,
)
test_dataset = datasets.MNIST(
    root='data',
    train=False,
    transform=transforms.ToTensor(),
    download=True,
)
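As an optional refinement (not used in the original demo), the inputs can be normalized with MNIST's commonly quoted mean and standard deviation; a minimal sketch:

# Optional sketch, not in the original: normalize with MNIST's published
# mean/std (0.1307, 0.3081). The demo trains fine on raw ToTensor() output;
# normalization typically just speeds up convergence.
normalized_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.1307,), (0.3081,)),
])
# To use it, pass transform=normalized_transform to datasets.MNIST above.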
train_loader = torch.utils.data.DataLoader(
    dataset=train_dataset,
    batch_size=batch_size,
    shuffle=True,
)
test_loader = torch.utils.data.DataLoader(
    dataset=test_dataset,
    batch_size=batch_size,
    shuffle=True,
)
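Before training, it can help to pull a single batch and check the tensor shapes; a small sketch (not in the original notebook):

# Sketch: grab one batch to verify shapes -- expect images of
# (batch_size, 1, 28, 28) and integer labels of (batch_size,).
images, labels = next(iter(train_loader))
print(images.shape)   # torch.Size([64, 1, 28, 28])
print(labels.shape)   # torch.Size([64])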
class CNN(nn.Module):
    def __init__(self):
        super(CNN, self).__init__()
        self.conv1 = nn.Sequential(           # input: (1, 28, 28)
            nn.Conv2d(
                in_channels=1,
                out_channels=16,              # number of feature maps to produce
                kernel_size=5,                # convolution kernel size
                stride=1,                     # stride
                padding=2,
            ),                                # output feature maps: (16, 28, 28)
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=2),      # 2x2 pooling, output: (16, 14, 14)
        )
        self.conv2 = nn.Sequential(           # input: (16, 14, 14)
            nn.Conv2d(16, 32, 5, 1, 2),       # output: (32, 14, 14)
            nn.ReLU(),
            nn.MaxPool2d(2),                  # output: (32, 7, 7)
        )
        self.out = nn.Linear(32 * 7 * 7, 10)  # fully connected layer

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)             # flatten to (batch_size, 32*7*7)
        output = self.out(x)
        return output, x
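A quick shape check with a dummy input (a sketch added here for illustration, not part of the original) confirms the dimensions worked out in the comments:

# Sketch: run a random dummy batch through an untrained network to confirm
# the layer arithmetic -- logits should be (2, 10) and the flattened
# features (2, 32*7*7) = (2, 1568).
dummy = torch.randn(2, 1, 28, 28)
logits, features = CNN()(dummy)
print(logits.shape)    # torch.Size([2, 10])
print(features.shape)  # torch.Size([2, 1568])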
def accuracy(predictions, labels):
    pred = torch.max(predictions.data, 1)[1]           # index of the max logit = predicted class
    rights = pred.eq(labels.data.view_as(pred)).sum()  # count of correct predictions
    return rights, len(labels)
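As a quick illustration of the return convention (a hypothetical call, not in the original): the function returns a (correct_count, batch_size) pair, so counts can be summed across batches and divided once at the end.

# Sketch: two samples, both predicted correctly -> returns (2, 2).
fake_logits = torch.tensor([[0.1, 0.9], [0.8, 0.2]])
fake_labels = torch.tensor([1, 0])
right, total = accuracy(fake_logits, fake_labels)
print(right.item(), total)  # 2 2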
device = 'cuda' if torch.cuda.is_available() else 'cpu'
device
'cuda'
net = CNN().to(device)
criterion = nn.CrossEntropyLoss()                    # loss function
optimizer = optim.Adam(net.parameters(), lr=0.001)   # optimizer
for epoch in range(num_epochs):
    # keep the per-batch results for this epoch
    train_rights = []
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.to(device)
        target = target.to(device)
        net.train()
        output = net(data)[0]
        loss = criterion(output, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        right = accuracy(output, target)
        train_rights.append(right)

        if batch_idx % 100 == 0:
            net.eval()
            val_rights = []
            with torch.no_grad():  # no gradients needed during evaluation
                for (data, target) in test_loader:
                    data = data.to(device)
                    target = target.to(device)
                    output = net(data)[0]
                    right = accuracy(output, target)
                    val_rights.append(right)

            # compute accuracy from the accumulated (correct, total) counts
            train_r = (sum([i[0] for i in train_rights]), sum(i[1] for i in train_rights))
            val_r = (sum([i[0] for i in val_rights]), sum(i[1] for i in val_rights))

            print('Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.2f}\tTrain acc: {:.2f}%\tTest acc: {:.2f}%'.format(
                epoch,
                batch_idx * batch_size,
                len(train_loader.dataset),
                100. * batch_idx / len(train_loader),
                loss.item(),
                100. * train_r[0].cpu().numpy() / train_r[1],
                100. * val_r[0].cpu().numpy() / val_r[1],
            ))
Epoch: 0 [0/60000 (0%)]	Loss: 2.31	Train acc: 4.69%	Test acc: 21.01%
Epoch: 0 [6400/60000 (11%)]	Loss: 0.51	Train acc: 75.94%	Test acc: 91.43%
Epoch: 0 [12800/60000 (21%)]	Loss: 0.28	Train acc: 84.05%	Test acc: 93.87%
Epoch: 0 [19200/60000 (32%)]	Loss: 0.15	Train acc: 87.77%	Test acc: 96.42%
Epoch: 0 [25600/60000 (43%)]	Loss: 0.08	Train acc: 89.82%	Test acc: 97.02%
Epoch: 0 [32000/60000 (53%)]	Loss: 0.14	Train acc: 91.20%	Test acc: 97.42%
Epoch: 0 [38400/60000 (64%)]	Loss: 0.04	Train acc: 92.13%	Test acc: 97.59%
Epoch: 0 [44800/60000 (75%)]	Loss: 0.08	Train acc: 92.83%	Test acc: 97.83%
Epoch: 0 [51200/60000 (85%)]	Loss: 0.12	Train acc: 93.38%	Test acc: 97.77%
Epoch: 0 [57600/60000 (96%)]	Loss: 0.19	Train acc: 93.81%	Test acc: 98.24%
Epoch: 1 [0/60000 (0%)]	Loss: 0.07	Train acc: 95.31%	Test acc: 97.90%
Epoch: 1 [6400/60000 (11%)]	Loss: 0.08	Train acc: 97.96%	Test acc: 98.27%
Epoch: 1 [12800/60000 (21%)]	Loss: 0.10	Train acc: 97.99%	Test acc: 98.30%
Epoch: 1 [19200/60000 (32%)]	Loss: 0.02	Train acc: 98.07%	Test acc: 98.20%
Epoch: 1 [25600/60000 (43%)]	Loss: 0.17	Train acc: 98.09%	Test acc: 98.40%
Epoch: 1 [32000/60000 (53%)]	Loss: 0.12	Train acc: 98.11%	Test acc: 98.68%
Epoch: 1 [38400/60000 (64%)]	Loss: 0.05	Train acc: 98.11%	Test acc: 98.63%
Epoch: 1 [44800/60000 (75%)]	Loss: 0.10	Train acc: 98.14%	Test acc: 98.70%
Epoch: 1 [51200/60000 (85%)]	Loss: 0.04	Train acc: 98.19%	Test acc: 98.56%
Epoch: 1 [57600/60000 (96%)]	Loss: 0.03	Train acc: 98.23%	Test acc: 98.67%
Epoch: 2 [0/60000 (0%)]	Loss: 0.06	Train acc: 98.44%	Test acc: 98.32%
Epoch: 2 [6400/60000 (11%)]	Loss: 0.03	Train acc: 98.64%	Test acc: 98.63%
Epoch: 2 [12800/60000 (21%)]	Loss: 0.05	Train acc: 98.70%	Test acc: 98.62%
Epoch: 2 [19200/60000 (32%)]	Loss: 0.01	Train acc: 98.72%	Test acc: 98.69%
Epoch: 2 [25600/60000 (43%)]	Loss: 0.01	Train acc: 98.70%	Test acc: 98.76%
Epoch: 2 [32000/60000 (53%)]	Loss: 0.03	Train acc: 98.70%	Test acc: 98.76%
Epoch: 2 [38400/60000 (64%)]	Loss: 0.07	Train acc: 98.70%	Test acc: 98.62%
Epoch: 2 [44800/60000 (75%)]	Loss: 0.07	Train acc: 98.72%	Test acc: 98.60%
Epoch: 2 [51200/60000 (85%)]	Loss: 0.03	Train acc: 98.71%	Test acc: 98.99%
Epoch: 2 [57600/60000 (96%)]	Loss: 0.05	Train acc: 98.74%	Test acc: 98.84%
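The notebook ends after training; as a follow-up (my addition, not part of the original), one might save the weights and test a single prediction. A minimal sketch; the file name is arbitrary:

# Sketch (not in the original notebook): persist the trained weights
# and run inference on one test image.
torch.save(net.state_dict(), 'mnist_cnn.pt')

net.eval()
with torch.no_grad():
    img, label = test_dataset[0]
    logits, _ = net(img.unsqueeze(0).to(device))  # add a batch dimension
    pred = logits.argmax(dim=1).item()
print('predicted:', pred, 'actual:', label)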
CNN training runs out of RAM because of a big dataset

Posted: 2022-01-19 06:07:56

Question: I have a large image dataset of roughly 30,000+ images. When I train the model, my system runs out of RAM, and I don't want to downsample the dataset. Is there any way to solve this?
import tensorflow as tf
from tensorflow.keras import Sequential, layers

# set up the initial constants
batch_size = 16
img_height = 512
img_width = 512
color_mode = 'rgba'

# split the dataset into training and validation,
# loading labels in categorical (one-hot) form
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    train_dir,
    labels='inferred',
    label_mode='categorical',
    color_mode=color_mode,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    train_dir,
    labels='inferred',
    label_mode='categorical',
    color_mode=color_mode,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

train_ds = train_ds.cache().prefetch(tf.data.AUTOTUNE)
val_ds = val_ds.cache().prefetch(tf.data.AUTOTUNE)

cnn_model = Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 4)),
    layers.Conv2D(16, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding='same', activation='relu'),
    layers.MaxPooling2D(),
    #layers.Dropout(0.2),
    layers.Flatten(),
    layers.Dense(64, activation='relu'),
    layers.Dense(5, activation="softmax")
])

cnn_model.compile(
    optimizer='adam',
    loss=tf.losses.CategoricalCrossentropy(),
    metrics=['accuracy', 'Recall', 'Precision', 'AUC']
)

def model_train(model, patience, namemodel):
    # early-stopping callback
    callback = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=patience)
    # TensorBoard callback for profiling
    tboard_callback = tf.keras.callbacks.TensorBoard(log_dir=log_dir,
                                                     histogram_freq=1,
                                                     profile_batch='500,520')
    model_save_callback = tf.keras.callbacks.ModelCheckpoint(
        filepath=save_dir + 'pd/' + namemodel,
        save_weights_only=False,
        monitor='val_loss',
        mode='min',
        save_best_only=True)
    history = model.fit(
        train_ds,
        validation_data=val_ds,
        epochs=1000,
        callbacks=[callback, model_save_callback]
        # note: batch_size is omitted here -- the tf.data pipeline is already
        # batched, and fit() rejects a batch_size argument for dataset inputs
    )
    return history

history = model_train(cnn_model, 30, 'cnn_{}{}{}v1'.format(img_height, color_mode, batch_size))
I know there is a way to feed the 30,000+ images to the model in parts, but I don't know how to do it. Or is there a better way to do this?
Comments:

The simplest approach is just to reduce your batch_size.

Why is there a 4 in input_shape=(img_height, img_width, 4)?

Because they are 4-channel PNG images.
Answer 1:

When you use image_dataset_from_directory, images and labels are fetched in batches for training. In your case you set the batch size to 16, so only 16 images and labels are loaded into memory at a time, rather than all 30,000. If you still run out of memory you can reduce the batch size further, but unless your memory is very small, a batch size of 16 should be fine. Consider shrinking the images instead: a 512 x 512 RGBA image means roughly 1,000,000 pixel values to process, which takes a lot of memory. Try 256 x 256, which is about 275K values, or better yet 128 x 128, which is only about 65K. I am not sure what the exact effect of cache is, but I expect it also increases memory usage, since I believe it pulls the next batch of data into memory while the network trains. Try removing those two lines of code and see whether the problem goes away.
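Putting the answer's suggestions together, a hedged sketch of a reduced-memory pipeline might look like the following. The smaller image size and the dropped cache() come from the answer; switching color_mode to 'rgb' is an extra assumption of mine that is only valid if the alpha channel carries no useful information:

# Sketch of the answer's suggestions (assumptions noted in comments):
# - smaller images (128x128 instead of 512x512)
# - no .cache(), so the full dataset is never held in RAM
# - color_mode='rgb' drops the alpha channel (assumption: alpha is unused;
#   the model's input_shape must then be (img_height, img_width, 3))
img_height = img_width = 128

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    train_dir,
    labels='inferred',
    label_mode='categorical',
    color_mode='rgb',
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)

# prefetch still overlaps loading with training, without caching everything
train_ds = train_ds.prefetch(tf.data.AUTOTUNE)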