Keras Image Classification: Error when checking input: expected input_1 to have 4 dimensions, but got array with shape (6885, 7500)


Posted: 2020-06-25 17:21:54

[Question description]:

I've seen other posts that say to just add the extra dimension it expects. First of all, I don't know exactly how to do that, but more importantly, I want to understand why my dimensions are changing, so that I can fix this on my own for future models.

Note that I am restricted to using an MLP for training, which means only fully connected layers. No convolutional layers or feedback (LSTM or any RNN architecture) are allowed, and no pretrained models such as ResNet, DenseNet, etc. I can use other operations between layers, such as Dropout, Batch Normalization, or other kinds of layer input/output manipulation. I expect I have to provide my whole code to get the help I need. Please excuse all the comments in my code; they remind me what everything does. I know I will need data augmentation, but I need this working first.

# **********************************************************************************************************************
# ----------------------------------------------------------------------------------------------------------------------
# Goal: Use MLP to classify blood cells into 4 different categories:
#                 1. red blood cell
#                 2. ring         - malaria
#                 3. schizont     - malaria
#                 4. trophozoite  - malaria
# Metric used: macro-averaged F1-score and
# Cohen's Kappa score - mean of these two scores
# **********************************************************************************************************************
#                                              import Packages
# **********************************************************************************************************************
import os
import cv2
from keras.layers import Dense, Input
from sklearn.preprocessing import LabelEncoder
# https://machinelearningmastery.com/how-to-normalize-center-and-standardize-images-with-the-imagedatagenerator-in-keras/
import random
import numpy as np
import tensorflow as tf
from keras.callbacks import ModelCheckpoint
from sklearn.model_selection import train_test_split
from sklearn.metrics import cohen_kappa_score, f1_score
import matplotlib.pyplot as plt
from keras.models import Model
from keras import optimizers
from keras.utils import to_categorical

# *************************************************************
# %% --------------------------------------- Set-Up --------------------------------------------------------------------

print('Set-up')
# np.load("x_train.npy"), np.load("y_train.npy")
SEED = 42
os.environ['PYTHONHASHSEED'] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)
# **********************************************************************************************************************
#                                               LOAD DATA
# **********************************************************************************************************************
print('Load Data')
if "train" not in os.listdir():
    os.system("wget https://storage.googleapis.com/exam-deep-learning/train.zip")
    os.system("unzip train.zip")

DATA_DIR = os.getcwd() + "/train/"
RESIZE_TO = 50  # 50 pixels

x, y = [], []
for path in [f for f in os.listdir(DATA_DIR) if f[-4:] == ".png"]:  # for .png images in directory
    x.append(cv2.resize(cv2.imread(DATA_DIR + path), (RESIZE_TO, RESIZE_TO)))  # resizes image to dimensions
    with open(DATA_DIR + path[:-4] + ".txt", "r") as s:  # opens a .txt file of same name as image
        label = s.read()  # reads the file's contents
    y.append(label)  # appends the contents to y data set
x, y = np.array(x), np.array(y)  # sets x and y to an array
le = LabelEncoder()  # label encoder  encodes target labels with values between 0 and n_classes-1, done to y target
le.fit(["red blood cell", "ring", "schizont", "trophozoite"])  # should have 4 values; fit label encoder
y = le.transform(y)  # transforms labels to normalized encoding
print(x.shape, y.shape)
# (8607, 50, 50, 3) (8607,)
print(x)
print(y)

# **********************************************************************************************************************
#                                          SPLIT DATA
# **********************************************************************************************************************
print('Split Data')
# This script generates the training set
# and the held out set
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=SEED, test_size=0.2, stratify=y)

np.save("x_train.npy", x_train)
np.save("y_train.npy", y_train)  # saves datasets as .npy files
np.save("x_test.npy", x_test)
np.save("y_test.npy", y_test)

# summarize dataset shape
print('Train shape', x_train.shape, y_train.shape)
print('Test shape', x_test.shape, y_test.shape)
# Train shape (6885, 50, 50, 3) (6885,)
# Test shape (1722, 50, 50, 3) (1722,)

# summarize pixel values
print('Train pixels', x_train.min(), x_train.max(), x_train.mean(), x_train.std())
print('Test pixels', x_test.min(), x_test.max(), x_test.mean(), x_test.std())
# Train pixels 0 255 129.69422568869524 68.92910646179355
# Test pixels 0 255 129.9020098335269 69.49813333178977

print('Training data shape : ', x_train.shape, y_train.shape)

print('Testing data shape : ', x_test.shape, y_test.shape)

# Find the unique numbers from the train labels
classes = np.unique(y_train)
nClasses = len(classes)
print('Total number of outputs : ', nClasses)
print('Output classes : ', classes)

# %% ----------------------------------- Hyper Parameters --------------------------------------------------------------
print('Set Hyperparameters')
# a model hyperparameter is a configuration that is external to the model and whose value cannot be estimated from data

LR = 1e-3  # learning rate
N_NEURONS = 4  # number of neurons
N_EPOCHS = 50  # number of epochs - # of passes through the entire training dataset the ML algorithm has completed
BATCH_SIZE = 100  # batch size - defines the scope of our data upfront. Limits the number of samples to be shown
#      to the network before a weight update can be performed.
#      The same limitation is then imposed when making predictions with the fit model.
#      Batch size used when fitting the model controls how many predictions you must make at a time.
#  https://machinelearningmastery.com/use-different-batch-sizes-training-predicting-python-keras/
DROPOUT = 0.2  # dropout - term used for a technique which drops out some nodes of the network.
#       Dropping out can be seen as temporarily deactivating or ignoring neurons of the network.
#       This technique is applied in the training phase to reduce overfitting effects.
#       Basic idea behind dropout is to dropout nodes so that the network can concentrate on other features.
#       https://www.python-course.eu/neural_networks_with_dropout.php
# Iterations is number of batches needed to complete one epoch.

# %% -------------------------------------- Data Prep ------------------------------------------------------------------
print('Data Prep')
# load training data from the .npy files created by Numpy.  loads data faster with .npy
# x_train, y_train, x_test, y_test = np.load("x_train.npy"), np.load("y_train.npy"), np.load("x_test.npy"), np.load("y_test.npy")
print('x_train shape: ', x_train.shape[1:])
print('x_test shape: ', x_test.shape[1:])
print('find shape of input')
# find shape of input image then reshape it into input format for training and testing sets
nRows, nCols, nDims = x_train.shape[1:]
train_data = x_train.reshape(x_train.shape[0], nRows, nCols, nDims)
test_data = x_test.reshape(x_test.shape[0], nRows, nCols, nDims)
input_shape = (nRows, nCols, nDims)
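# (with RESIZE_TO = 50 and the 3 BGR channels read by cv2.imread, input_shape here is (50, 50, 3) -- the per-sample shape)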

print('train_data shape: ', train_data.shape[1:])
print('test_data shape: ', test_data.shape[1:])

# then change all datatypes into floats
train_data = train_data.astype('float32')
test_data = test_data.astype('float32')

# flatten and normalize data
print('flatten and normalize data')
train_data = train_data.reshape(len(train_data), -1)
test_data = test_data.reshape(len(test_data), -1)
train_data /= 255
test_data /= 255
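# note: reshape(len(...), -1) collapses each (50, 50, 3) image into a single 7500-element vector (50 * 50 * 3 = 7500),
# so train_data now has shape (6885, 7500) and test_data has shape (1722, 7500)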

train_labels_one_hot = to_categorical(y_train)
test_labels_one_hot = to_categorical(y_test)

print('Original label 0 : ', y_train[0])
print('After conversion to categorical ( one-hot ) : ', train_labels_one_hot[0])

# %%-------------------------------will want to eventually do this data augmentation--------------------------
# data augmentation later
# %% -------------------------------------- Training Prep ----------------------------------------------------------
print('Training Prep')
print('Create Model')
# this returns a tensor
inputs = Input(shape=input_shape)
output_1 = Dense(N_NEURONS, activation="relu")(inputs)
output_2 = Dense(N_NEURONS, activation="relu")(output_1)
predictions = Dense(nClasses, activation="softmax")(output_2)

# This creates a model that includes the Input layer and three Dense layers
model = Model(inputs=inputs, outputs=predictions)
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Before training a model, you need to configure the learning process, which is done via the compile method.
# It receives three arguments:
# 1. optimizer
# 2. loss function - objective that the model will try to minimize
# 3. list of metrics - for classification problem, set to metrics=['accuracy']
model.compile(optimizers.Adam(lr=LR), loss="categorical_crossentropy", metrics=["accuracy"])
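# note: this second compile() replaces the rmsprop configuration above with Adam at the chosen learning rate;
# only the most recent compile() call takes effect when training starts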

model.summary()
# %% -------------------------------------- Training Loop ----------------------------------------------------------
print('training Loop')
model.fit(
    train_data,
    train_labels_one_hot,
    batch_size=BATCH_SIZE,
    epochs=N_EPOCHS,
    validation_data=(test_data, test_labels_one_hot),
    callbacks=[ModelCheckpoint("mlp_user1.hdf5", monitor="val_loss", save_best_only=True)])

# modelcheckpoint sets check-point in the learning model during training in Python using the Keras library
# a snapshot of the state of the system is taken in case of system failure.  It may be used directly, or as the
# starting point for a new run, picking up where it left off.
# The checkpoint is the weights of the model.  These weights can be used to make predictions as is,
# or used as the basis for ongoing training.
# https://machinelearningmastery.com/check-point-deep-learning-models-keras/

# %% ------------------------------------------ Final test -------------------------------------------------------------
print('Final Test')
print("Final accuracy on validations set:", 100 * model.evaluate(x_test, y_test)[1], "%")
print("Cohen Kappa", cohen_kappa_score(np.argmax(model.predict(x_test), axis=1), np.argmax(y_test, axis=1)))
print("F1 score", f1_score(np.argmax(model.predict(x_test), axis=1), np.argmax(y_test, axis=1), average='macro'))

[Question discussion]:

[Answer 1]:

The problem here is that you flatten the data for normalization and then forget to reshape it back to its old shape (these lines, train_data = train_data.reshape(len(train_data), -1) and test_data = test_data.reshape(len(test_data), -1), flatten every dimension except the first), yet you still use the old, pre-flattening dimensions as the input dimensions (input_shape = (nRows, nCols, nDims), inputs = Input(shape=input_shape)).
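A minimal sketch of the two possible fixes, reusing the variable names from the question's code (which one to use depends on whether the manual flattening step is kept):

# Option 1: keep the flattened (N, 7500) arrays and declare a 1-D input shape
inputs = Input(shape=(nRows * nCols * nDims,))          # (7500,) for 50x50x3 images
output_1 = Dense(N_NEURONS, activation="relu")(inputs)

# Option 2: skip the manual flattening of train_data/test_data, keep the (N, 50, 50, 3) arrays,
# and let a Flatten layer do the collapsing inside the model
from keras.layers import Flatten
inputs = Input(shape=(nRows, nCols, nDims))
output_1 = Dense(N_NEURONS, activation="relu")(Flatten()(inputs))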

[Discussion]:

I interpret what you are saying as meaning I need to move the input_shape definition to after the point where I flatten for normalization. I changed it to input_shape = (train_data), moved it after the flattening code, and got the same error.

Actually, what I mean is that the variable input_shape = (nRows, nCols, nDims) has 3 dimensions, but your training and test data are (nRows, some_number). So either change the variable input_shape, or reshape your test_data to have (batch, nRows, nCols, nDims) dimensions.

So now I'm reading the error message more carefully. It says it expected x dimensions but got an array with shape (x, y, z). The (x, y, z) is the data I'm looking at, and the "expected" part is what it thinks it is getting through the input_shape argument I set. I had been trying to make the input_shape argument be my data. What I understood, and yet didn't understand, was that something else gets added onto my input_shape to give it the same shape as my data. Even though I had tried it there once before and it didn't work, it worked as soon as I just put 7500 into the tensor format: inputs = Input(shape=(7500,)).

Thank you so much @Ronakrit W for helping me look at what was actually being said so I could make the right change. I then changed what the final test uses to test_data and test_labels_one_hot, and the "expected vs. got" error I was getting there went away as well. Now on to improving the score and data augmentation. :) Thank you!!!
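For reference, a sketch of the corrected final-test block the last comment describes, evaluating on the flattened test_data and the one-hot encoded test_labels_one_hot rather than on x_test / y_test:

print("Final accuracy on validation set:", 100 * model.evaluate(test_data, test_labels_one_hot)[1], "%")
print("Cohen Kappa", cohen_kappa_score(np.argmax(model.predict(test_data), axis=1), np.argmax(test_labels_one_hot, axis=1)))
print("F1 score", f1_score(np.argmax(model.predict(test_data), axis=1), np.argmax(test_labels_one_hot, axis=1), average='macro'))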
