Coursera TensorFlow Basics Course - Week 4
Posted by fansy1990
Using Real-world Images
Reference: installing TensorFlow on Ubuntu 16, and installing TensorFlow for Jupyter Notebook.
This post is a translated summary of Introduction to TensorFlow for Artificial Intelligence, Machine Learning, and Deep Learning.
For learning and non-commercial exchange only!
Other related articles:
Coursera TensorFlow Basics Course - Week 4
Coursera TensorFlow Basics Course - Week 3
Coursera TensorFlow Basics Course - Week 2
Coursera TensorFlow Basics Course - Week 1
1. Technology for good & introduction
How can a new technology like TensorFlow benefit people?
As the video shown in the course illustrates, a new technology such as TensorFlow really can solve practical problems that people face.
So how do we solve real-world problems with TensorFlow?
- In real life, images are certainly not all 28*28 pixels (all the images processed so far were 28*28);
- The image labels are not handed to us already prepared;
- The data is not already split into training and test sets for us;
So we need to learn some data preprocessing before TensorFlow can really be put to use!
2. Introducing the ImageGenerator
TensorFlow provides a utility, the ImageDataGenerator (referred to as ImageGenerator in the course), which organizes the data for us and labels it at the same time.
The idea is as follows:
If we create an images directory, create two subdirectories under it, Training and Validation, and then under each of those create Horse and Human directories and put the images inside, ImageDataGenerator can automatically load the contents of each directory as the training and validation sets, and every image receives a label based on the directory it sits in.
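As a rough sketch (the directory and file names here are only illustrative, not the actual dataset), the layout that flow_from_directory expects can be pictured like this; the snippet just creates the empty structure:
# Sketch: the directory layout flow_from_directory expects; labels come from the sub-directory names.
from pathlib import Path
base = Path('images')
for split in ('Training', 'Validation'):
    for label in ('Horse', 'Human'):
        (base / split / label).mkdir(parents=True, exist_ok=True)
# images/Training/Horse/horse01.png   -> labeled as the 'Horse' class
# images/Training/Human/human01.png   -> labeled as the 'Human' class
# images/Validation/Horse/..., images/Validation/Human/... -> used as validation data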
To use ImageDataGenerator, it first needs to be imported:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
The data is then loaded with code like this:
train_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(300, 300),
    batch_size=128,
    class_mode='binary'
)
validation_datagen = ImageDataGenerator(rescale=1./255)
validation_generator = validation_datagen.flow_from_directory(
    validation_dir,
    target_size=(300, 300),
    batch_size=32,
    class_mode='binary'
)
The code is explained as follows:
- ImageDataGenerator takes a rescale argument that normalizes the pixel values (here every pixel value is divided by 255.0);
- train_dir is the training directory, i.e. images/Training in the example above; validation_dir is the corresponding validation directory, images/Validation;
- target_size is the size images are resized to when loaded. For example, if an original image is 1200*1200, it is fed to the model as 300*300. This matters because the input images may all have different sizes, while the TensorFlow model needs every input to be the same size;
- batch_size is the number of images processed at a time; tuning it changes the training performance (efficiency);
- class_mode (note the spelling) is set to 'binary' here because this is a two-class problem; the snippet below shows how to check the resulting label mapping.
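To double-check how the generator mapped directories to labels, you can inspect its class_indices attribute (a quick sketch, assuming the generators above have already been created):
# Sketch: verify the directory-to-label mapping produced by flow_from_directory.
print(train_generator.class_indices)  # e.g. {'Horse': 0, 'Human': 1}
print(train_generator.samples)        # number of images the generator found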
3. Building a model to recognize humans and horses
This application uses TensorFlow to build a model from the provided images so that it can recognize humans and horses in real-life pictures.
Define a network model for the problem; the code is as follows:
model = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16,(3,3),activation='relu',input_shape=(300,300,3)),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(32,(3,3),activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Conv2D(64,(3,3),activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(512,activation='relu'),
tf.keras.layers.Dense(1,activation='sigmoid')
])
This code is explained as follows:
- It starts with three convolution + pooling layer pairs;
- These are followed by a Flatten layer, a Dense layer, and a final Dense layer;
- The first layer's input_shape is (300,300,3); earlier models used shapes like (300,300) without the trailing 3. The 3 is the number of color channels (red, green, blue), i.e. 3 bytes per pixel;
- The final Dense layer has only one neuron even though this is a two-class problem. With a sigmoid activation, a single output in [0, 1] is enough for binary classification, and it is more efficient than the earlier design (where 10 neurons represented 10 classes; you could also use 2 neurons here if you prefer).
Calling summary() on this model produces the following output:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 298, 298, 16) 448
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 149, 149, 16) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 147, 147, 32) 4640
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 71, 71, 64) 18496
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64) 0
_________________________________________________________________
flatten (Flatten) (None, 78400) 0
_________________________________________________________________
dense (Dense) (None, 512) 40141312
_________________________________________________________________
dense_1 (Dense) (None, 1) 513
=================================================================
Total params: 40,165,409
Trainable params: 40,165,409
Non-trainable params: 0
_________________________________________________________________
This output matches the earlier explanation; briefly:
- The first convolution layer uses 3*3 filters, so the output loses 2 pixels in each dimension (300 becomes 298);
- The first pooling layer uses a 2*2 window, so the output size is halved (298 becomes 149);
- The remaining layers follow the same pattern; the Param # column is reproduced by hand in the short calculation below.
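A convolution layer has kernel_height * kernel_width * input_channels * filters weights plus one bias per filter, and a Dense layer has inputs * units weights plus one bias per unit. A small sketch of the arithmetic for the summary above:
# Sketch: reproduce the Param # column of the summary above.
conv1 = (3 * 3 * 3) * 16 + 16        # 448
conv2 = (3 * 3 * 16) * 32 + 32       # 4,640
conv3 = (3 * 3 * 32) * 64 + 64       # 18,496
dense1 = (35 * 35 * 64) * 512 + 512  # 40,141,312 (the Flatten layer outputs 78,400 values)
dense2 = 512 * 1 + 1                 # 513
print(conv1 + conv2 + conv3 + dense1 + dense2)  # 40,165,409 total params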
Next, the model can be compiled:
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer = RMSprop(lr=0.001),
metrics=['acc']
)
In this code, the loss function is binary cross-entropy, the optimizer is RMSprop (an optimizer that adapts the learning rate), and accuracy is reported as the metric.
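Note that lr is the argument name used in the course-era Keras; newer TensorFlow releases renamed it, so with a recent version the compile step would look roughly like this (a sketch, not the course code):
from tensorflow.keras.optimizers import RMSprop
# In newer TensorFlow/Keras the argument is learning_rate rather than lr.
model.compile(loss='binary_crossentropy',
              optimizer=RMSprop(learning_rate=0.001),
              metrics=['acc'])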
Then the model can be trained:
history = model.fit_generator(
train_generator,
steps_per_epoch=8,
epochs=15,
validation_data = validation_generator,
validation_steps = 8,
verbose=2
)
The code is explained as follows:
- Earlier code called fit to train the model; here we use fit_generator because the input data is produced by an ImageDataGenerator;
- train_generator is the generator created above from the training directory;
- There are about 1,024 training images and batch_size was set to 128, so it takes 8 batches to go through them all, hence steps_per_epoch=8;
- epochs is set to 15, so the model trains for 15 epochs; this can of course be adjusted;
- validation_data is set to the validation_generator created earlier;
- There are 256 validation images and batch_size was set to 32, so validation_steps=8 covers all of them;
- verbose controls how much progress information is printed; with 2, the per-batch progress details are not shown (one summary line per epoch).
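In newer TensorFlow 2.x releases fit_generator is deprecated and model.fit accepts generators directly, so the equivalent call would look roughly like this (a sketch; the behaviour should be the same):
# Sketch for newer TF 2.x: model.fit accepts the generator directly.
history = model.fit(
    train_generator,
    steps_per_epoch=8,
    epochs=15,
    validation_data=validation_generator,
    validation_steps=8,
    verbose=2
)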
4. Hands-on: recognizing humans and horses
1. Download and unpack the data
Download the data; on Linux, use the following command (or download it from https://pan.baidu.com/s/1NTUQkQyy1P4nkEcUa8wNRA, extraction code: fsfc):
wget https://storage.googleapis.com/laurencemoroney-blog.appspot.com/horse-or-human.zip
After downloading, move it to the /opt/data/horse_human directory, then unzip it and look at the result:
unzip horse-or-human.zip
#ls
horse-or-human.zip horses humans
You can see two directories: horses and humans.
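The archive can also be unpacked from Python with the standard zipfile module instead of the shell (a sketch; the path assumes the zip was placed under /opt/data/horse_human):
# Sketch: unpack the archive with Python's zipfile module.
import zipfile
with zipfile.ZipFile('/opt/data/horse_human/horse-or-human.zip', 'r') as zf:
    zf.extractall('/opt/data/horse_human')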
2. Preview the data
- Define the data directories:
import os
# Directory with horse pictures
train_horse_dir = os.path.join('/opt/data/horse_human/horses')
# Directory with human pictures
train_human_dir = os.path.join('/opt/data/horse_human/humans')
- Look at some of the data:
train_horse_names = os.listdir(train_horse_dir)
print(train_horse_names[:10])
train_human_names = os.listdir(train_human_dir)
print(train_human_names[:10])
The output is:
['horse33-5.png', 'horse03-9.png', 'horse20-3.png', 'horse44-9.png', 'horse04-7.png', 'horse42-1.png', 'horse06-8.png', 'horse48-8.png', 'horse39-0.png', 'horse07-3.png']
['human12-25.png', 'human15-08.png', 'human02-10.png', 'human13-26.png', 'human02-29.png', 'human14-09.png', 'human14-29.png', 'human11-02.png', 'human09-01.png', 'human16-10.png']
This lists 10 horse and 10 human file names; the file name itself also tells you each image's label.
3. Count the images
print('total training horse images:', len(os.listdir(train_horse_dir)))
print('total training human images:', len(os.listdir(train_human_dir)))
The output is:
total training horse images: 500
total training human images: 527
The numbers of horse and human images are roughly equal (about half and half).
4. Display some images
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
# Parameters for our graph; we'll output images in a 4x4 configuration
nrows = 4
ncols = 4
# Index for iterating over images
pic_index = 0
# Set up matplotlib fig, and size it to fit 4x4 pics
fig = plt.gcf()
fig.set_size_inches(ncols * 4, nrows * 4)
pic_index += 8
next_horse_pix = [os.path.join(train_horse_dir, fname)
                  for fname in train_horse_names[pic_index-8:pic_index]]
next_human_pix = [os.path.join(train_human_dir, fname)
                  for fname in train_human_names[pic_index-8:pic_index]]

for i, img_path in enumerate(next_horse_pix + next_human_pix):
    # Set up subplot; subplot indices start at 1
    sp = plt.subplot(nrows, ncols, i + 1)
    sp.axis('Off')  # Don't show axes (or gridlines)
    img = mpimg.imread(img_path)
    plt.imshow(img)

plt.show()
Running this code produces a figure like the following:
The figure shows 8 horse images and 8 human images.
5. Build and train the model
- Build the model:
model = tf.keras.models.Sequential([
# The input uses the raw image size of 300x300
# This is the first convolution
tf.keras.layers.Conv2D(16, (3,3), activation='relu', input_shape=(300, 300, 3)),
tf.keras.layers.MaxPooling2D(2, 2),
# The second convolution
tf.keras.layers.Conv2D(32, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The third convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fourth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# The fifth convolution
tf.keras.layers.Conv2D(64, (3,3), activation='relu'),
tf.keras.layers.MaxPooling2D(2,2),
# Flatten the results to feed into a DNN
tf.keras.layers.Flatten(),
# 512 neuron hidden layer
tf.keras.layers.Dense(512, activation='relu'),
# Only 1 output neuron. It will contain a value from 0-1 where 0 for 1 class ('horses') and 1 for the other ('humans')
tf.keras.layers.Dense(1, activation='sigmoid')
])
Note that the input_shape here must match the target_size set in the corresponding ImageDataGenerator; a quick check of how the spatial size shrinks through the network follows below.
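As a sanity check of how the 300x300 input shrinks through the five convolution/pooling pairs (a sketch of the 'valid'-padding arithmetic: a 3x3 convolution removes 2 pixels, a 2x2 max-pool halves the size and rounds down):
# Sketch: trace the spatial size through five Conv2D(3x3) + MaxPooling2D(2x2) pairs.
size = 300
for block in range(1, 6):
    size = (size - 2) // 2
    print('after block', block, ':', size, 'x', size)
# Ends at 7x7, so the Flatten layer sees 7 * 7 * 64 = 3136 values.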
- Compile the model:
from tensorflow.keras.optimizers import RMSprop
model.compile(loss='binary_crossentropy',
optimizer=RMSprop(lr=0.001),
metrics=['acc'])
- Build the input with ImageDataGenerator:
from tensorflow.keras.preprocessing.image import ImageDataGenerator
# All images will be rescaled by 1./255
train_datagen = ImageDataGenerator(rescale=1/255)
# Flow training images in batches of 128 using train_datagen generator
train_generator = train_datagen.flow_from_directory(
'/opt/data/horse_human/', # This is the source directory for training images
target_size=(300, 300), # All images will be resized to 300x300
batch_size=128,
# Since we use binary_crossentropy loss, we need binary labels
class_mode='binary')
Note that target_size is set to 300x300 here. The output is:
Found 1027 images belonging to 2 classes.
- Train the model:
history = model.fit_generator(
train_generator,
steps_per_epoch=8,
epochs=15,
verbose=1)
If you get an image-loading (PIL) error, install the Pillow package first: pip3 install pillow.
With verbose set to 1, the full training progress is printed, as shown below (this runs on a virtual machine with 6 GB of RAM and 3 cores, though slowly):
Epoch 1/15
9/9 [==============================] - 412s 46s/step - loss: 0.7084 - acc: 0.5696
Epoch 2/15
9/9 [==============================] - 535s 59s/step - loss: 0.7116 - acc: 0.6972
Epoch 3/15
9/9 [==============================] - 385s 43s/step - loss: 0.7930 - acc: 0.7546
Epoch 4/15
9/9 [==============================] - 427s 47s/step - loss: 0.4925 - acc: 0.8169
Epoch 5/15
9/9 [==============================] - 596s 66s/step - loss: 0.2293 - acc: 0.9241
Epoch 6/15
9/9 [==============================] - 366s 41s/step - loss: 0.1235 - acc: 0.9426
Epoch 7/15
9/9 [==============================] - 290s 32s/step - loss: 0.1182 - acc: 0.9513
Epoch 8/15
9/9 [==============================] - 242s 27s/step - loss: 0.2449 - acc: 0.9338
Epoch 9/15
9/9 [==============================] - 273s 30s/step - loss: 0.0659 - acc: 0.9727
Epoch 10/15
9/9 [==============================] - 368s 41s/step - loss: 0.0964 - acc: 0.9649
Epoch 11/15
9/9 [==============================] - 353s 39s/step - loss: 0.0674 - acc: 0.9834
Epoch 12/15
9/9 [==============================] - 428s 48s/step - loss: 0.0714 - acc: 0.9796
Epoch 13/15
9/9 [==============================] - 378s 42s/step - loss: 0.0701 - acc: 0.9776
Epoch 14/15
9/9 [==============================] - 398s 44s/step - loss: 0.1257 - acc: 0.9581
Epoch 15/15
9/9 [==============================] - 373s 41s/step - loss: 0.0505 - acc: 0.9776
As you can see, the training accuracy reaches about 98%.
6. Predict on real images
Download some images from the web (the four images used here can also be downloaded here), as follows:
There are four real photos, two humans and two horses, and their resolutions differ.
Now use the model we just built to make predictions (put the downloaded images in the /opt/data/new_human_horse directory):
import numpy as np
from tensorflow.keras.preprocessing import image

new_files = os.listdir('/opt/data/new_human_horse')
# print(new_files)
for fn in new_files:
    # predicting images
    path = '/opt/data/new_human_horse/' + fn
    # target_size must match the model's input_shape (300x300 here)
    img = image.load_img(path, target_size=(300, 300))
    x = image.img_to_array(img)
    x = np.expand_dims(x, axis=0)
    images = np.vstack([x])
    classes = model.predict(images, batch_size=10)
    print(classes[0])
    if classes[0] > 0.5:
        print(fn + " is a human")
    else:
        print(fn + " is a horse")
The output is:
[0.]
human_2d13d9403be7421981a325ff383a0f4a.jpg is a horse
[0.]
human-4320806490398999875.jpg is a horse
[0.]
horse-image-placeholder-title.jpg is a horse
[0.]
horse-th.jpg is a horse
As the output shows, the human images are misclassified. (In the original video the chosen images were all classified correctly, so this may simply be down to the images I picked.)
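One possible reason for the misclassifications (my own observation, not from the course): the training generator rescaled pixel values by 1/255, while the prediction loop above feeds raw 0-255 values into the model. Normalizing the array the same way before predicting may help, e.g.:
# Sketch: apply the same 1/255 rescaling used during training before predicting.
x = image.img_to_array(img) / 255.0
x = np.expand_dims(x, axis=0)
classes = model.predict(x, batch_size=10)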
- Pick a random image and visualize the output of each layer:
import numpy as np
import random
from tensorflow.keras.preprocessing.image import img_to_array, load_img
# Let's define a new Model that will take an image as input, and will output
# intermediate representations for all layers in the previous model after
# the first.
successive_outputs = [layer.output for layer in model.layers[1:]]
#visualization_model = Model(img_input, successive_outputs)
visualization_model = tf.keras.models.Model(inputs = model.input, outputs = successive_outputs)
# Let's prepare a random input image from the training set.
horse_img_files = [os.path.join(train_horse_dir, f) for f in train_horse_names]
human_img_files = [os.path.join(train_human_dir, f) for f in train_human_names]
img_path = random.choice(horse_img_files + human_img_files)
img = load_img(img_path, target_size=(300, 300)) # this is a PIL image
x = img_to_array(img) # Numpy array with shape (300, 300, 3)
x = x.reshape((1,) + x.shape) # Numpy array with shape (1, 300, 300, 3)
# Rescale by 1/255
x /= 255
# Let's run our image through our network, thus obtaining all
# intermediate representations for this image.
successive_feature_maps = visualization_model.predict(x)
# These are the names of the layers we are visualizing, so we can have them as part of our plot
layer_names = [layer.name for layer in model.layers[1:]]  # skip the first layer to match successive_outputs
# Now let's display our representations
for layer_name, feature_map in zip(layer_names, successive_feature_maps):
    if len(feature_map.shape) == 4:
        # Just do this for the conv / maxpool layers, not the fully-connected layers
        n_features = feature_map.shape[-1]  # number of features in the feature map
        # The feature map has shape (1, size, size, n_features)
        size = feature_map.shape[1]
        # We will tile our images in this matrix
        display_grid = np.zeros((size, size * n_features))
        for i in range(n_features):
            # Postprocess the feature to make it visually palatable
            x = feature_map[0, :, :, i]
            x -= x.mean()
            x /= x.std()
            x *= 64
            x += 128
            x = np.clip(x, 0, 255).astype('uint8')
            # We'll tile each filter into this big horizontal grid
            display_grid[:, i * size : (i + 1) * size] = x
        # Display the grid
        scale = 20. / n_features
        plt.figure(figsize=(scale * n_features, scale))
        plt.title(layer_name)
        plt.grid(False)
        plt.imshow(display_grid, aspect='auto', cmap='viridis')
The output is a grid of feature maps for each layer:
Reading from top to bottom, you can think of this as transforming the image so that only the features relevant to the target (here, whether it is a human or a horse) are kept. (The original course describes it as a distillation process that preserves the essence.)
- Add a validation set:
First, download the validation data here, or fetch it with the following command:
wget https://storage.googleapis.com/laurencemoroney-blog.appspot.com/validation-horse-or-human.zip
Then unpack it into the /opt/data/horse_human_validation directory:
# Directory with our validation horse pictures
validation_horse_dir = os.path.join('/opt/data/horse_human_validation/horses')
# Directory with our validation human pictures
validation_human_dir = os.path.join('/opt/data/horse_human_validation/humans')
Next, build an ImageDataGenerator for the validation data:
validation_datagen = ImageDataGenerator(rescale=1/255)
# Flow validation images in batches of 32 using the validation_datagen generator
validation_generator = validation_datagen.flow_from_directory(
    '/opt/data/horse_human_validation/',  # This is the source directory for validation images
    target_size=(300, 300),  # All images will be resized to 300x300
    batch_size=32,
    # Since we use binary_crossentropy loss, we need binary labels
    class_mode='binary')
Run model.fit_generator again, this time passing the validation arguments:
history = model.fit_generator(
train_generator,
steps_per_epoch=8,
epochs=15,
verbose=1,
validation_data = validation_generator,
validation_steps=8)
The output is:
Epoch 1/15
8/8 [==============================] - 22s 3s/step - loss: 0.7348 - acc: 0.5000
9/9 [==============================] - 362s 40s/step - loss: 0.9071 - acc: 0.5307 - val_loss: 0.7348 - val_acc: 0.5000
Epoch 2/15
8/8 [==============================] - 22s 3s/step - loss: 0.7880 - acc: 0.5078
9/9 [==============================] - 285s 32s/step - loss: 0.8083 - acc: 0.5735 - val_loss: 0.7880 - val_acc: 0.5078
Epoch 3/15
8/8 [==============================] - 40s 5s/step - loss: 0.6607 - acc: 0.5938
9/9 [==============================] - 297s 33s/step - loss: 1.0091 - acc: 0.7322 - val_loss: 0.6607 - val_acc: 0.5938
Epoch 4/15
8/8 [==============================] - 37s 5s/step - loss: 0.4939 - acc: 0.7617
9/9 [==============================] - 283s 31s/step - loss: 0.6453 - acc: 0.6699 - val_loss: 0.4939 - val_acc: 0.7617
Epoch 5/15
8/8 [==============================] - 24s 3s/step - loss: 1.1673 - acc: 0.7695
9/9 [==============================] - 289s 32s/step - loss: 0.3698 - acc: 0.8763 - val_loss: 1.1673 - val_acc: 0.7695
Epoch 6/15
8/8 [==============================] - 23s 3s/step - loss: 0.7771 - acc: 0.7930
9/9 [==============================] - 273s 30s/step - loss: 0.2960 - acc: 0.8627 - val_loss: 0.7771 - val_acc: 0.7930
Epoch 7/15
8/8 [==============================] - 25s 3s/step - loss: 0.5656 - acc: 0.8906
9/9 [==============================] - 284s 32s/step - loss: 0.1102 - acc: 0.9542 - val_loss: 0.5656 - val_acc: 0.8906
Epoch 8/15
8/8 [==============================] - 26s 3s/step - loss: 2.3894 - acc: 0.7031
9/9 [==============================] - 289s 32s/step - loss: 0.3854 - acc: 0.8987 - val_loss: 2.3894 - val_acc: 0.7031
Epoch 9/15
8/8 [==============================] - 36s 5s/step - loss: 2.4947 - acc: 0.7031
9/9 [==============================] - 301s 33s/step - loss: 0.1747 - acc: 0.9445 - val_loss: 2.4947 - val_acc: 0.7031
Epoch 10/15
8/8 [==============================] - 34s 4s/step - loss: 0.2772 - acc: 0.9219
9/9 [==============================] - 471s 52s/step - loss: 0.0700 - acc: 0.9679 - val_loss: 0.2772 - val_acc: 0.9219
Epoch 11/15
8/8 [==============================] - 32s 4s/step - loss: 1.1290 - acc: 0.8008
9/9 [==============================] - 373s 41s/step - loss: 0.8948 - acc: 0.8705 - val_loss: 1.1290 - val_acc: 0.8008
Epoch 12/15
8/8 [==============================] - 43s 5s/step - loss: 1.0806 - acc: 0.8594
9/9 [==============================] - 608s 68s/step - loss: 0.0565 - acc: 0.9834 - val_loss: 1.0806 - val_acc: 0.8594
Epoch 13/15
8/8 [==============================] - 44s 5s/step - loss: 1.1165 - acc: 0.8398
9/9 [==============================] - 552s 61s/step - loss: 0.0357 - acc: 0.9864 - val_loss: 1.1165 - val_acc: 0.8398
Epoch 14/15
8/8 [==============================] - 35s 4s/step - loss: 1.7010 - acc: 0.7617
9/9 [==============================] - 647s 72s/step - loss: 0.0177 - acc: 0.9951 - val_loss: 1.7010 - val_acc: 0.7617
Epoch 15/15
8/8 [==============================] - 35s 4s/step - loss: 1.1240 - acc: 0.8555
9/9 [==============================] - 503s 56s/step - loss: 0.0318 - acc: 0.9854 - val_loss: 1.1240 - val_acc: 0.8555
From these results, the model does well on the training set (accuracy around 98.5%) and reasonably well on the validation set that was never used during training (around 85.5%).
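To see this gap between training and validation accuracy at a glance, the history object returned by fit_generator can be plotted (a small sketch):
# Sketch: plot training vs. validation accuracy from the history object.
import matplotlib.pyplot as plt
acc = history.history['acc']
val_acc = history.history['val_acc']
epochs = range(1, len(acc) + 1)
plt.plot(epochs, acc, label='training accuracy')
plt.plot(epochs, val_acc, label='validation accuracy')
plt.xlabel('epoch')
plt.ylabel('accuracy')
plt.legend()
plt.show()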
Classifying the new images again gives:
[1.]
human_2d13d9403be7421981a325ff383a0f4a.jpg is a human
[0.]
human-4320806490398999875.jpg is a horse
[0.]
horse-image-placeholder-title.jpg is a horse
[0.]
horse-th.jpg is a horse
Now only one of the human images is misclassified, so the accuracy has improved.
- Shrinking the images:
In the runs above, target_size was 300; what happens if we change it to 150?
After changing the relevant places to 150 (the generators' target_size and the model's input_shape), retraining gives:
Epoch 1/15
8/8 [==============================] - 8s 963ms/step - loss: 0.6753 - acc: 0.5000
9/9 [==============================] - 76s 8s/step - loss: 0.7239 - acc: 0.5278 - val_loss: 0.6753 - val_acc: 0.5000
Epoch 2/15
8/8 [==============================] - 9s 1s/step - loss: 0.4213 - acc: 0.8438
9/9 [==============================] - 73s 8s/step - loss: 0.7311 - acc: 0.7050 - val_loss: 0.4213 - val_acc: 0.8438
Epoch 3/15
8/8 [==============================] - 9s 1s/step - loss: 1.0088 - acc: 0.6172
9/9 [==============================] - 68s 8s/step - loss: 0.5170 - acc: 0.8169 - val_loss: 1.0088 - val_acc: 0.6172
Epoch 4/15
8/8 [==============================] - 8s 980ms/step - loss: 0.3365 - acc: 0.8828
9/9 [==============================] - 69s 8s/step - loss: 0.5933 - acc: 0.7897 - val_loss: 0.3365 - val_acc: 0.8828
Epoch 5/15
8/8 [==============================] - 8s 976ms/step - loss: 0.4389 - acc: 0.8516
9/9 [==============================] - 72s 8s/step - loss: 0.2726 - acc: 0.8987 - val_loss: 0.4389 - val_acc: 0.8516
Epoch 6/15
8/8 [==============================] - 8s 943ms/step - loss: 1.1277 - acc: 0.8008
9/9 [==============================] - 71s 8s/step - loss: 0.1266 - acc: 0.9494 - val_loss: 1.1277 - val_acc: 0.8008
Epoch 7/15
8/8 [==============================] - 9s 1s/step - loss: 1.9571 - acc: 0.6953
9/9 [==============================] - 72s 8s/step - loss: 0.1502 - acc: 0.9464 - val_loss: 1.9571 - val_acc: 0.6953
Epoch 8/15
8/8 [==============================] - 11s 1s/step - loss: 0.7124 - acc: 0.8359
9/9 [==============================] - 80s 9s/step - loss: 0.3604 - acc: 0.8763 - val_loss: 0.7124 - val_acc: 0.8359
Epoch 9/15
8/8 [==============================] - 14s 2s/step - loss: 0.6322 - acc: 0.8320
9/9 [==============================] - 85s 9s/step - loss: 0.1740 - acc: 0.9416 - val_loss: 0.6322 - val_acc: 0.8320
Epoch 10/15
8/8 [==============================] - 8s 940ms/step - loss: 0.6428 - acc: 0.8242
9/9 [==============================] - 78s 9s/step - loss: 0.1222 - acc: 0.9640 - val_loss: 0.6428 - val_acc: 0.8242
Epoch 11/15
8/8 [==============================] - 11s 1s/step - loss: 0.8398 - acc: 0.8516
9/9 [==============================] - 72s 8s/step - loss: 0.0538 - acc: 0.9844 - val_loss: 0.8398 - val_acc: 0.8516
Epoch 12/15
8/8 [==============================] - 9s 1s/step - loss: 0.4072 - acc: 0.8242
9/9 [==============================] - 72s 8s/step - loss: 0.5111 - acc: 0.8802 - val_loss: 0.4072 - val_acc: 0.8242
Epoch 13/15
8/8 [==============================] - 8s 996ms/step - loss: 0.8312 - acc: 0.8438
9/9 [==============================] - 72s 8s/step - loss: 0.1396 - acc: 0.9426 - val_loss: 0.8312 - val_acc: 0.8438
Epoch 14/15
8/8 [==============================] - 10s 1s/step - loss: 0.8713 - acc: 0.8477
9/9 [==============================] - 72s 8s/step - loss: 0.1203 - acc: 0.9552 - val_loss: 0.8713 - val_acc: 0.8477
Epoch 15/15
8/8 [==============================] - 8s 1s/step - loss: 1.0197 - acc: 0.8516
9/9 [==============================] - 75s 8s/step - loss: 0.0227 - acc: 0.9942 - val_loss: 1.0197 - val_acc: 0.8516
Evaluating the new model on the new images gives:
[1.]
human_2d13d9403be7421981a325ff383a0f4a.jpg is a human
[0.]
human-4320806490398999875.jpg is a horse
[0.]
horse-image-placeholder-title.jpg is a horse
[0.]
horse-th.jpg is a horse
Comparing the two runs:
a. With target_size reduced to 150, training is noticeably faster;
b. With 150, the model's accuracy drops somewhat, and so does its performance on the new images.
5. Quiz
1. Using Image Generator, how do you label images?
- a. It’s based on the directory the image is contained in
- b. You have to manually do it
- c. It’s based on the file name
- d. TensorFlow figures it out from the contents
2. What method on the Image Generator is used to normalize the image?
- a. rescale
- b. normalize
- c. Rescale_image
- d. normalize_image
3. How did we specify the training size for the images?
- a. The training_size parameter on the validation generator
- b. The target_size parameter on the training generator
- c. The target_size parameter on the validation generator
- d. The training_size parameter on the training generator
4. When we specify the input_shape to be (300, 300, 3), what does that mean?
- a. There will be 300 images, each size 300, loaded in batches of 3
- b. There will be 300 horses and 300 humans, loaded in batches of 3
- c. Every Image will be 300x300 pixels, and there should be 3 Convolutional Layers
- d. Every Image will be 300x300 pixels, with 3 bytes to define color
5. If your training data is close to 1.000 accuracy, but your validation data isn't, what's the risk here?
- a. You’re overfitting on your training data
- b. You’re overfitting on your validation data
- c. No risk, that’s a great result
- d. You’re underfitting on your validation data
6. Convolutional Neural Networks are better for classifying images like horses and humans because:
- a. In these images, the features may be in different parts of the frame
- b. There’s a wide variety of horses
- c. There’s a wide variety of humans
- d. All of the above
7. After reducing the size of the images, the training results were different. Why?
- a. There was more condensed information in the images
- b. There was less information in the images
- c. The training was faster
- d. We removed some convolutions to handle the smaller images
My guess:
- a
- a
- b
- d
- a
- d
- b
6. Final exercise
The provided data contains 80 images, 40 happy faces and 40 sad faces; train a model on it that reaches 99.9%+ accuracy.
Hint: three convolutional layers work best!
The starter code is below.
Note: download the data first, or fetch it with this command:
wget https://storage.googleapis.com/laurencemoroney-blog.appspot.com/happy-or-sad.zip
After downloading, unzip it and move it to the /opt/data/happy_sad directory.
import tensorflow as tf
import os
DESIRED_ACCURACY = 0.999
class myCallback(# Your Code):
# Your Code
callbacks = myCallback()
# This Code Block should Define and Compile the Model
model = tf.keras.models.Sequential([
# Your Code Here
])
from tensorflow.keras.optimizers import RMSprop
model.compile(# Your Code Here #)
# This code block should create an instance of an ImageDataGenerator called train_datagen
# And a train_generator by calling train_datagen.flow_from_directory
from tensorflow.keras.preprocessing.image import ImageDataGenerator
train_datagen = # Your Code Here
train_generator = train_datagen.flow_from_directory(
# Your Code Here)
# Expected output: 'Found 80 images belonging to 2 classes'
# This code block should call model.fit_generator and train for
# a number of epochs.
history = model.fit_generator(
# Your Code Here)
# Expected output: "Reached 99.9% accuracy so cancelling training!""
Answer Download Here
Code Download Here:
First