Keras Image Classification - Prediction accuracy on validation dataset does not match val_acc

Posted: 2019-08-28 12:25:32

Question:

I am trying to classify a set of images into two categories, left and right.

I built a CNN with Keras, and my classifier seems to work well:

- I have 1,939 images for training (50% left, 50% right)
- I have 648 images for validation (50% left, 50% right)
- All images are 115x45, grayscale
- acc increases up to 99.53%
- val_acc increases up to 98.38%
- loss and val_loss are both close to 0

The Keras verbose output looks normal to me:

Epoch 1/32
60/60 [==============================] - 6s 98ms/step - loss: 0.6295 - acc: 0.6393 - val_loss: 0.4877 - val_acc: 0.7641
Epoch 2/32
60/60 [==============================] - 5s 78ms/step - loss: 0.4825 - acc: 0.7734 - val_loss: 0.3403 - val_acc: 0.8799
Epoch 3/32
60/60 [==============================] - 5s 77ms/step - loss: 0.3258 - acc: 0.8663 - val_loss: 0.2314 - val_acc: 0.9042
Epoch 4/32
60/60 [==============================] - 5s 83ms/step - loss: 0.2498 - acc: 0.8942 - val_loss: 0.2329 - val_acc: 0.9042
Epoch 5/32
60/60 [==============================] - 5s 76ms/step - loss: 0.2408 - acc: 0.9002 - val_loss: 0.1426 - val_acc: 0.9432
Epoch 6/32
60/60 [==============================] - 5s 80ms/step - loss: 0.1968 - acc: 0.9260 - val_loss: 0.1484 - val_acc: 0.9367
Epoch 7/32
60/60 [==============================] - 5s 77ms/step - loss: 0.1621 - acc: 0.9319 - val_loss: 0.1141 - val_acc: 0.9578
Epoch 8/32
60/60 [==============================] - 5s 81ms/step - loss: 0.1600 - acc: 0.9361 - val_loss: 0.1229 - val_acc: 0.9513
Epoch 9/32
60/60 [==============================] - 4s 70ms/step - loss: 0.1358 - acc: 0.9462 - val_loss: 0.0884 - val_acc: 0.9692
Epoch 10/32
60/60 [==============================] - 4s 74ms/step - loss: 0.1193 - acc: 0.9542 - val_loss: 0.1232 - val_acc: 0.9529
Epoch 11/32
60/60 [==============================] - 5s 79ms/step - loss: 0.1075 - acc: 0.9595 - val_loss: 0.0865 - val_acc: 0.9724
Epoch 12/32
60/60 [==============================] - 4s 73ms/step - loss: 0.1209 - acc: 0.9531 - val_loss: 0.1067 - val_acc: 0.9497
Epoch 13/32
60/60 [==============================] - 4s 73ms/step - loss: 0.1135 - acc: 0.9609 - val_loss: 0.0860 - val_acc: 0.9838
Epoch 14/32
60/60 [==============================] - 4s 70ms/step - loss: 0.0869 - acc: 0.9682 - val_loss: 0.0907 - val_acc: 0.9675
Epoch 15/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0960 - acc: 0.9637 - val_loss: 0.0996 - val_acc: 0.9643
Epoch 16/32
60/60 [==============================] - 4s 73ms/step - loss: 0.0951 - acc: 0.9625 - val_loss: 0.1223 - val_acc: 0.9481
Epoch 17/32
60/60 [==============================] - 4s 70ms/step - loss: 0.0685 - acc: 0.9729 - val_loss: 0.1220 - val_acc: 0.9513
Epoch 18/32
60/60 [==============================] - 4s 73ms/step - loss: 0.0791 - acc: 0.9715 - val_loss: 0.0959 - val_acc: 0.9692
Epoch 19/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0595 - acc: 0.9802 - val_loss: 0.0648 - val_acc: 0.9773
Epoch 20/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0486 - acc: 0.9844 - val_loss: 0.0691 - val_acc: 0.9838
Epoch 21/32
60/60 [==============================] - 4s 70ms/step - loss: 0.0499 - acc: 0.9812 - val_loss: 0.1166 - val_acc: 0.9627
Epoch 22/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0481 - acc: 0.9844 - val_loss: 0.0875 - val_acc: 0.9734
Epoch 23/32
60/60 [==============================] - 4s 70ms/step - loss: 0.0533 - acc: 0.9814 - val_loss: 0.1094 - val_acc: 0.9724
Epoch 24/32
60/60 [==============================] - 4s 70ms/step - loss: 0.0487 - acc: 0.9812 - val_loss: 0.0722 - val_acc: 0.9740
Epoch 25/32
60/60 [==============================] - 4s 72ms/step - loss: 0.0441 - acc: 0.9828 - val_loss: 0.0992 - val_acc: 0.9773
Epoch 26/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0667 - acc: 0.9726 - val_loss: 0.0964 - val_acc: 0.9643
Epoch 27/32
60/60 [==============================] - 4s 73ms/step - loss: 0.0436 - acc: 0.9835 - val_loss: 0.0771 - val_acc: 0.9708
Epoch 28/32
60/60 [==============================] - 4s 71ms/step - loss: 0.0322 - acc: 0.9896 - val_loss: 0.0872 - val_acc: 0.9756
Epoch 29/32
60/60 [==============================] - 5s 80ms/step - loss: 0.0294 - acc: 0.9943 - val_loss: 0.1414 - val_acc: 0.9578
Epoch 30/32
60/60 [==============================] - 5s 76ms/step - loss: 0.0348 - acc: 0.9870 - val_loss: 0.1102 - val_acc: 0.9659
Epoch 31/32
60/60 [==============================] - 5s 76ms/step - loss: 0.0306 - acc: 0.9922 - val_loss: 0.0794 - val_acc: 0.9659
Epoch 32/32
60/60 [==============================] - 5s 76ms/step - loss: 0.0152 - acc: 0.9953 - val_loss: 0.1051 - val_acc: 0.9724
Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 113, 43, 32)       896
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 56, 21, 32)        0
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 54, 19, 32)        9248
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 27, 9, 32)         0
_________________________________________________________________
flatten_1 (Flatten)          (None, 7776)              0
_________________________________________________________________
dense_1 (Dense)              (None, 128)               995456
_________________________________________________________________
dense_2 (Dense)              (None, 1)                 129
=================================================================
Total params: 1,005,729
Trainable params: 1,005,729
Non-trainable params: 0
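As a quick sanity check, the "Param #" column of the summary above can be reproduced by hand (kernel height x kernel width x input channels x filters, plus one bias per filter or unit):

```python
# Reproduce the parameter counts reported in the model summary.
conv2d_1 = 3 * 3 * 3 * 32 + 32    # 3x3 kernel, 3 input channels, 32 filters + 32 biases
conv2d_2 = 3 * 3 * 32 * 32 + 32   # 3x3 kernel, 32 input channels, 32 filters + 32 biases
dense_1 = 7776 * 128 + 128        # 27 * 9 * 32 = 7776 flattened inputs, 128 units + biases
dense_2 = 128 * 1 + 1             # 128 inputs, 1 sigmoid unit + 1 bias
total = conv2d_1 + conv2d_2 + dense_1 + dense_2
print(conv2d_1, conv2d_2, dense_1, dense_2, total)  # 896 9248 995456 129 1005729
```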

So everything looks fine, but when I tried to predict the classes of 2,000 samples I got very strange results, with a poor accuracy.

At first I thought this sample might be biased, so I tried predicting the images from the validation dataset instead.

I should get 98.38% accuracy and a perfect 50-50 split, but instead, once again I get:

- right: 170 images predicted instead of 324, with 98.8% accuracy
- left: 478 images predicted instead of 324, with 67.3% accuracy
- average accuracy: 75.69% instead of 98.38%
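Taking the stated per-class accuracies at face value, the overall figure checks out as a weighted average over the 648 validation images (a rough cross-check; the small gap to the reported 75.69% comes from rounding in the quoted percentages):

```python
# Weighted average of the two per-class accuracies over all predictions.
right_pred, left_pred = 170, 478
right_acc, left_acc = 0.988, 0.673
overall = (right_pred * right_acc + left_pred * left_acc) / (right_pred + left_pred)
print(round(overall * 100, 2))  # 75.56, close to the reported 75.69%
```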

I guess something is wrong either with my CNN or with my prediction script.

CNN classifier code:

from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True

# Init CNN
classifier = Sequential()

# Step 1 - Convolution
classifier.add(Conv2D(32, (3, 3), input_shape = (115, 45, 3), activation = 'relu'))

# Step 2 - Pooling
classifier.add(MaxPooling2D(pool_size = (2, 2)))

# Adding a second convolutional layer
classifier.add(Conv2D(32, (3, 3), activation = 'relu'))
classifier.add(MaxPooling2D(pool_size = (2, 2)))

# Step 3 - Flattening
classifier.add(Flatten())

# Step 4 - Full connection
classifier.add(Dense(units = 128, activation = 'relu'))
classifier.add(Dense(units = 1, activation = 'sigmoid'))

# Compiling the CNN
classifier.compile(optimizer = 'adam', loss = 'binary_crossentropy', metrics = ['accuracy'])

# Part 2 - Fitting the CNN to the images
from keras.preprocessing.image import ImageDataGenerator
import numpy

train_datagen = ImageDataGenerator(rescale = 1./255, shear_range = 0.2, zoom_range = 0.2, horizontal_flip = False)
test_datagen = ImageDataGenerator(rescale = 1./255)

training_set = train_datagen.flow_from_directory('./dataset/training_set',
                                                 target_size = (115, 45),
                                                 batch_size = 32,
                                                 class_mode = 'binary')

test_set = test_datagen.flow_from_directory('./dataset/test_set',
                                            target_size = (115, 45),
                                            batch_size = 32,
                                            class_mode = 'binary')

classifier.fit_generator(training_set,
                         steps_per_epoch = 1939/32, # total samples / batch size
                         epochs = 32,
                         validation_data = test_set,
                         validation_steps = 648/32)

# Save the classifier
classifier.evaluate_generator(generator=test_set)
classifier.summary()
classifier.save('./classifier.h5')

Prediction code:

from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Flatten
from keras.layers import Dense
from keras.models import load_model
from keras.preprocessing.image import ImageDataGenerator
import os
import numpy as np
from keras.preprocessing import image
from shutil import copyfile

classifier = load_model('./classifier.h5')
folder = './small/'
files = os.listdir(folder)
pleft = 0
pright = 0
for f in files:
    test_image = image.load_img(folder+f, target_size = (115, 45))
    test_image = image.img_to_array(test_image)
    test_image = np.expand_dims(test_image, axis = 0)
    result = classifier.predict(test_image)
    #print training_set.class_indices
    if result[0][0] == 1:
        pright=pright+1
        prediction = 'right'
        copyfile(folder+'../'+f, '/found_right/'+f)
    else:
        prediction = 'left'
        copyfile(folder+'../'+f, '/found_left/'+f)
        pleft=pleft+1

ptot = pleft + pright
print 'Left = '+str(pleft)+' ('+str(pleft / (ptot / 100))+'%)'
print 'Right = '+str(pright)
print 'Total = '+str(ptot)

Output:

Left = 478 (79%)
Right = 170
Total = 648
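One detail worth flagging in the prediction loop above (also raised in the comments below): `result[0][0] == 1` is an exact float comparison on a sigmoid output. A minimal illustration, with a made-up prediction value close to, but not exactly, 1:

```python
import numpy as np

# Hypothetical sigmoid output from classifier.predict(); the value is
# made up, but real outputs are floats that may merely look like 1.0
# once rounded for display.
result = np.array([[0.99999988]])

fragile = bool(result[0][0] == 1)   # exact comparison: False here
robust = bool(result[0][0] > 0.5)   # 0.5 is the cutoff Keras uses for accuracy: True
print(fragile, robust)              # False True
```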

Your help would be greatly appreciated.

Comments:

- I haven't done much training with generators, but your accuracy in the first epoch is very high with only 60 batches of images; my best guess is that it is overfitting batch by batch. I'm not too confident about that, though. Try running with larger batches?
- You are not using the model predictions correctly; comparing the model's output against 1 makes no sense. You need to use an adjustable threshold (Keras uses 0.5 to compute accuracy).
- @MatiasValdenegro, I double-checked, and in my case the Keras output is always [[1.]] or [[0.]]. I read that this might be related to the activation function used.
- Whatever you observe, Keras computes accuracy by thresholding the output values at 0.5. If you print the values they are probably rounded, which is why you see 1.0 instead of 0.99999. Comparing floats for equality is bad practice anyway.
- Also, you are not rescaling (dividing by 255) the image values at prediction time, while the generator is doing that for you during training.

Answer 1:

I solved this by doing two things:

    As @MatiasValdenegro suggested, I had to rescale the image values before predicting; I added test_image /= 255 before calling predict().

    Since my val_loss was still a bit high, I added an EarlyStopping callback, as well as two Dropout() layers before my Dense layers.

My prediction results are now consistent with the ones obtained during training/validation.
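A minimal sketch of the corrected preprocessing, using a NumPy stand-in for the array that image.img_to_array() returns; the key line is the division by 255, matching the training generator's rescale=1./255:

```python
import numpy as np

# Stand-in for a loaded 115x45 RGB image: img_to_array() yields a float
# array with values in [0, 255], like this one.
test_image = np.full((115, 45, 3), 255.0, dtype=np.float32)

test_image /= 255                            # the added rescaling step
test_image = np.expand_dims(test_image, axis=0)

print(test_image.shape, test_image.max())    # (1, 115, 45, 3) 1.0
# classifier.predict(test_image) now sees inputs in the same [0, 1] range
# it was trained on; the class decision should then use a 0.5 threshold:
# prediction = 'right' if result[0][0] > 0.5 else 'left'
```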

