Artificial Intelligence -- Keras Convolutional Neural Networks
Posted by Abro.
Theoretical basis:
Convolutional neural networks
Objectives and requirements:
- Master the main steps of building a convolutional neural network with Keras.
- Understand the convolution operation (a short NumPy sketch follows this list).
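To make the convolution operation concrete, here is a minimal NumPy sketch, not part of the assignment code; the names conv2d_valid, image and kernel are illustrative. It computes a "valid" 2-D cross-correlation, which is what Conv2D applies before the activation function:

import numpy as np

def conv2d_valid(image, kernel):
    # "valid" cross-correlation: the kernel stays fully inside the image,
    # so an (H, W) image and a (kH, kW) kernel give an (H-kH+1, W-kW+1) map.
    kH, kW = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kH, j:j+kW] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
kernel = np.ones((3, 3)) / 9.0                     # 3x3 averaging kernel
print(conv2d_valid(image, kernel).shape)           # (3, 3), i.e. 5 - 3 + 1 in each direction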
Task:
Modify the code below to work on the cifar10 dataset: change the network structure to LeNet, adjust the optimization algorithm and its learning rate, the batch size batch_size and the number of epochs, and analyse the corresponding results.
Code to modify:
# In[1]: load the data
from keras.datasets import mnist
from keras import utils
(x_train, y_train), (x_test, y_test) = mnist.load_data()
y_train = utils.to_categorical(y_train, num_classes=10)  # one-hot encode the labels
y_test = utils.to_categorical(y_test, num_classes=10)
x_train, x_test = x_train/255.0, x_test/255.0  # scale pixel values to [0, 1]
# In[2]: build the network
from keras import Sequential,layers,optimizers
model = Sequential([
    layers.Reshape((28,28,1), input_shape=(28,28)),  # 2-D convolutions expect input of shape [samples, height, width, channels]
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),  # 3x3 kernels, 32 output channels
    layers.MaxPooling2D(pool_size=(2, 2)),  # downsample by taking the maximum over 2x2 windows
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.Flatten(),  # flatten the previous feature maps into one vector (3*3*64 = 576)
    layers.Dropout(0.5),  # during training, randomly zero 50% of the incoming activations
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.summary()
#optimizer = optimizers.SGD(learning_rate=0.5)  # on this example SGD gives a slightly lower accuracy
optimizer = optimizers.Adam(learning_rate=0.001)  # learning_rate is the current name of the old lr argument
#optimizer = optimizers.RMSprop(learning_rate=0.001)
model.compile(optimizer,loss='categorical_crossentropy', metrics=['accuracy'])
# In[3]: train and evaluate
model.fit(x_train, y_train, batch_size=64, epochs=1)
loss, accuracy = model.evaluate(x_test, y_test)
Process description:
After changing the network structure to LeNet, setting the learning rate to 0.005, the batch size to 32 and the number of epochs to 15, the following result was obtained:
the final recognition accuracy on the test set reaches 0.7328.
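For reference, a LeNet-5-style network for 32x32x3 inputs could look like the sketch below in Keras. This is only an illustration of what "change the structure to LeNet" might mean, not the exact network that produced the 0.7328 result; the filter counts and 5x5 kernels follow the classical LeNet-5, while the activations and pooling are modernised to ReLU and max pooling:

from tensorflow.keras import Sequential, layers

lenet = Sequential([
    layers.Conv2D(6, kernel_size=(5, 5), activation="relu", input_shape=(32, 32, 3)),
    layers.MaxPooling2D(pool_size=(2, 2)),   # LeNet-5 uses subsampling layers; max pooling is a common substitute
    layers.Conv2D(16, kernel_size=(5, 5), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Flatten(),                        # 5*5*16 = 400 features
    layers.Dense(120, activation="relu"),
    layers.Dense(84, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
lenet.summary()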
Source code:
# -*- coding: utf-8 -*-
"""
Spyder Editor
This is a temporary script file.
"""
# In[1]: load the data
from tensorflow.keras.datasets import cifar10
from tensorflow.keras import utils
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
y_train = utils.to_categorical(y_train, num_classes=10)  # one-hot encode the labels
y_test = utils.to_categorical(y_test, num_classes=10)
x_train, x_test = x_train/255.0, x_test/255.0  # scale pixel values to [0, 1]
# In[2]: build the network
from tensorflow.keras import Sequential,layers,optimizers
model = Sequential([
    layers.Reshape((32,32,3), input_shape=(32,32,3)),  # cifar10 images are already [height, width, channels]; this layer only declares the input shape
    layers.Conv2D(32, kernel_size=(3, 3), activation="relu"),  # 3x3 kernels, 32 output channels
    layers.MaxPooling2D(pool_size=(2, 2)),  # downsample by taking the maximum over 2x2 windows
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.MaxPooling2D(pool_size=(2, 2)),
    layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
    layers.Flatten(),  # flatten the previous feature maps into one vector (4*4*64 = 1024)
    layers.Dropout(0.5),  # during training, randomly zero 50% of the incoming activations
    layers.Dense(64, activation="relu"),
    layers.Dense(10, activation="softmax"),
])
model.summary()
#optimizer = optimizers.SGD(learning_rate=0.5)  # on this example SGD gives a slightly lower accuracy
optimizer = optimizers.Adam(learning_rate=0.005)  # learning_rate is the current name of the old lr argument
#optimizer = optimizers.RMSprop(learning_rate=0.001)
model.compile(optimizer,loss='categorical_crossentropy', metrics=['accuracy'])
#model.compile(optimizer,loss='mean_squared_logarithmic_error', metrics=['accuracy'])
#model.compile(optimizer,loss='mean_squared_error', metrics=['accuracy'])
# In[3]: train and evaluate
# batch_size can be changed to another power of two
model.fit(x_train, y_train, batch_size=32, epochs=15)
loss, accuracy = model.evaluate(x_test, y_test)
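The script computes loss and accuracy but never prints them, and the task also asks to analyse the results. One way to do that, shown only as a sketch that reuses the model and data variables defined in the script above, is to keep the training history and hold out a validation split:

history = model.fit(x_train, y_train, batch_size=32, epochs=15,
                    validation_split=0.1)  # hold out 10% of the training data for validation
loss, accuracy = model.evaluate(x_test, y_test)
print(f"test loss = {loss:.4f}, test accuracy = {accuracy:.4f}")
print(sorted(history.history.keys()))  # 'accuracy', 'loss', 'val_accuracy', 'val_loss' per epoch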
Learning outcome:
A fairly good result can be trained just by adjusting batch_size and the number of epochs; a minimal way to explore such settings is sketched below.
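The sketch below loops over a few batch sizes and epoch counts and records the test accuracy of each run. It assumes the cifar10 variables (x_train, y_train, x_test, y_test) from the script above; the build_model helper is hypothetical and simply rebuilds the same network so that every trial starts from fresh weights:

from tensorflow.keras import Sequential, layers, optimizers

def build_model(learning_rate):
    model = Sequential([
        layers.Conv2D(32, kernel_size=(3, 3), activation="relu", input_shape=(32, 32, 3)),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.MaxPooling2D(pool_size=(2, 2)),
        layers.Conv2D(64, kernel_size=(3, 3), activation="relu"),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(64, activation="relu"),
        layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizers.Adam(learning_rate=learning_rate),
                  loss='categorical_crossentropy', metrics=['accuracy'])
    return model

results = {}
for batch_size in (32, 64, 128):      # powers of two, as suggested in the source above
    for epochs in (5, 15):
        model = build_model(learning_rate=0.005)
        model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=0)
        _, acc = model.evaluate(x_test, y_test, verbose=0)
        results[(batch_size, epochs)] = acc  # test accuracy for this setting
print(results)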