Crossvalidation of a Keras model with multiple inputs using scikit-learn


【Posted】: 2020-04-08 12:55:50 【Question】:

I want to apply K-Fold cross-validation to my neural network model as follows:

from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
from keras.wrappers.scikit_learn import KerasClassifier
import keras
from keras import layers
from keras.layers import Conv1D, Dense
import numpy as np

X = df.iloc[:,0:10165]  
X = X.to_numpy()                      
X = X.reshape([X.shape[0], X.shape[1],1]) 
X_train_1 = X[:,0:10080,:]                     
X_train_2 = X[:,10080:10165,:].reshape(921,85)      
Y = df.iloc[:,10168:10170]
Y = Y.to_numpy()

def my_model():

    inputs_1 = keras.Input(shape=(10080,1))
    layer1 = Conv1D(64,14)(inputs_1)
    layer2 = layers.MaxPool1D(5)(layer1)
    layer3 = Conv1D(64, 14)(layer2)       
    layer4 = layers.GlobalMaxPooling1D()(layer3)

    inputs_2 = keras.Input(shape=(85,))
    layer5 = layers.concatenate([layer4, inputs_2])
    layer6 = Dense(128, activation='relu')(layer5)
    layer7 = Dense(2, activation='softmax')(layer6)

    model_2 = keras.models.Model(inputs = [inputs_1, inputs_2], outputs = [layer7])
    model_2.summary()    
    adam = keras.optimizers.Adam(lr = 0.0001)
    model_2.compile(loss = 'categorical_crossentropy', optimizer = adam, metrics = ['acc'])
    return model_2    
model_2 = KerasClassifier(build_fn=my_model, epochs=150, batch_size=10, verbose=0)
kfold = StratifiedKFold(n_splits=10, shuffle=True)
results = cross_val_score(model_2, [X_train_1,X_train_2], Y, cv=kfold)
print(results.mean())

and got this error:

ValueError                                Traceback (most recent call last)
<ipython-input-44-297145425a53> in <module>()
     42 # evaluate using 10-fold cross validation
     43 kfold = StratifiedKFold(n_splits=10, shuffle=True)
---> 44 results = cross_val_score(model_2, [X_train_1,X_train_2], Y, cv=kfold)
     45 print(results.mean())

3 frames
/usr/local/lib/python3.6/dist-packages/sklearn/utils/validation.py in check_consistent_length(*arrays)
    203     if len(uniques) > 1:
    204         raise ValueError("Found input variables with inconsistent numbers of"
--> 205                          " samples: %r" % [int(l) for l in lengths])
    206 
    207 

ValueError: Found input variables with inconsistent numbers of samples: [2, 921]

The shape and type of each variable are as follows:

X         (921, 10165, 1)  numpy.ndarray
Y         (921, 2)         numpy.ndarray
X_train_1 (921, 10080, 1)  numpy.ndarray
X_train_2 (921, 85)        numpy.ndarray

The model runs perfectly when I don't do K-Fold cross-validation, i.e. if I simply fit:

model_2.compile(loss = 'categorical_crossentropy', optimizer = adam, metrics = ['acc']) 
history = model_2.fit([X_train_1,X_train_2], y_train, epochs = 120, batch_size = 256, validation_split = 0.2, callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', patience=20)])

So I am not sure what is going wrong behind the error message. Any help is appreciated. Thanks.


Edit: here is the original model:

inputs_1 = keras.Input(shape=(10081,1))

layer1 = Conv1D(64,14)(inputs_1)
layer2 = layers.MaxPool1D(5)(layer1)
layer3 = Conv1D(64, 14)(layer2)
layer4 = layers.GlobalMaxPooling1D()(layer3)


inputs_2 = keras.Input(shape=(85,))            
layer5 = layers.concatenate([layer4, inputs_2])
layer6 = Dense(128, activation='relu')(layer5)
layer7 = Dense(2, activation='softmax')(layer6)


model_2 = keras.models.Model(inputs = [inputs_1, inputs_2], outputs = [layer7])
model_2.summary()


X_train, X_test, y_train, y_test = train_test_split(df.iloc[:,0:10166], df[['Result1','Result2']].values, test_size=0.2)     

X_train = X_train.to_numpy()
X_train = X_train.reshape([X_train.shape[0], X_train.shape[1], 1])
X_train_1 = X_train[:,0:10081,:]
X_train_2 = X_train[:,10081:10166,:].reshape(736,85)  


X_test = X_test.to_numpy()
X_test = X_test.reshape([X_test.shape[0], X_test.shape[1], 1])
X_test_1 = X_test[:,0:10081,:]
X_test_2 = X_test[:,10081:10166,:].reshape(185,85)    


adam = keras.optimizers.Adam(lr = 0.0005) 
model_2.compile(loss = 'categorical_crossentropy', optimizer = adam, metrics = ['acc']) 
history = model_2.fit([X_train_1,X_train_2], y_train, epochs = 120, batch_size = 256, validation_split = 0.2, callbacks = [keras.callbacks.EarlyStopping(monitor='val_loss', patience=20)])

【Comments】:

What is your intention with [X_train_1, X_train_2]?

@Geeocode The input is split into inputs_1 and inputs_2 because the former is time-series data and the latter is statistical data. Given the very different nature of the data, I split them so that separate layers can process each part.

Please see my edited answer below.

【Answer 1】:

Scikit-learn's cross_val_score complains because it detects that your X and y have different lengths. That is because you passed:

[X_train_1,X_train_2]

X effectively has 2 fake "samples" on axis 0, because it is a list with two members, whereas y has 921 samples on axis 0.
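The mismatch is easy to reproduce with plain NumPy (a minimal sketch; smaller dummy arrays stand in for the real data, keeping the same axis-0 size): cross_val_score's length check ultimately calls len() on X and y, and a Python list of two arrays reports a length of 2.

```python
import numpy as np

# Small dummy arrays standing in for the real data (same axis-0 size as in the question)
X_train_1 = np.zeros((921, 10, 1))   # stand-in for the (921, 10080, 1) array
X_train_2 = np.zeros((921, 85))
Y = np.zeros((921, 2))

# cross_val_score compares len(X) with len(y); a list of two arrays has length 2
print(len([X_train_1, X_train_2]))  # 2   -> the "2" in the error message
print(len(Y))                       # 921 -> the "921" in the error message
```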

Edit:

After some research I found that sklearn's split() method supports neither multi-input data nor one-hot encoded labels.
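For the one-hot part, the usual workaround (and the one used in the solution below) is to recover 1-D integer class labels with np.argmax before calling split(); a minimal sketch with hypothetical two-class labels:

```python
import numpy as np

# Hypothetical one-hot labels, as produced by to_categorical with 2 classes
Y = np.array([[1, 0], [0, 1], [0, 1], [1, 0]])

# StratifiedKFold.split expects 1-D integer class labels,
# so take the index of the "hot" column in each row
Y_kat = np.argmax(Y, axis=1)
print(Y_kat)  # [0 1 1 0]
```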

Solution:

So, as a workaround, you can build your own cross-validation with sklearn as follows.

First, import and define everything we need:

from sklearn.model_selection import StratifiedKFold
import numpy as np
import keras
from keras import layers
from keras.layers import Conv1D, Dense
from keras.utils.np_utils import to_categorical

# This is just for dummy data ##################################
X_train_1 = np.random.randint(0, 10000, (921, 10080, 1))
X_train_2 = np.random.randint(0, 10000, (921, 85))
Y_kat = np.random.randint(0, 2, (921))
Y = to_categorical(Y_kat, num_classes=2)
# This is just for dummy data ##################################


def my_model():

    inputs_1 = keras.Input(shape=(10080, 1))
    layer1 = Conv1D(64,14)(inputs_1)
    layer2 = layers.MaxPool1D(5)(layer1)
    layer3 = Conv1D(64, 14)(layer2)       
    layer4 = layers.GlobalMaxPooling1D()(layer3)

    inputs_2 = keras.Input(shape=(85,))
    layer5 = layers.concatenate([layer4, inputs_2])
    layer6 = Dense(128, activation='relu')(layer5)
    layer7 = Dense(2, activation='softmax')(layer6)

    model_2 = keras.models.Model(inputs = [inputs_1, inputs_2], outputs = [layer7])
    # model_2.summary()    
    adam = keras.optimizers.Adam(lr = 0.0001)
    model_2.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['acc'])
    return model_2    

Now let's look at the actual solution:

# We need convert one_hot encoded labels to categorical labels for skf
Y_kat = np.argmax(Y, axis=1)

n_folds = 5
skf = StratifiedKFold(n_splits=n_folds, shuffle=True)
skf = skf.split(X_train_1, Y_kat)

cv_score = []

for i, (train, test) in enumerate(skf):
    # Keras currently has no model.reset(), so the easiest way to start
    # each fold with fresh weights is to rebuild and recompile the model
    model_2 = my_model()

    print("Running Fold", i+1, "/", n_folds)
    model_2.fit([X_train_1[train], X_train_2[train]], Y[train], epochs=150, batch_size=10)
    result = model_2.evaluate([X_train_1[test], X_train_2[test]], Y[test])
    # if we want only the accuracy metric
    cv_score.append(result[1])
    # clear the previous model from memory so the next fold starts fresh
    keras.backend.clear_session()

print("\nMean accuracy of the crossvalidation: {}".format(np.mean(cv_score)))

Output:

Mean accuracy of the crossvalidation: 0.5049177408218384

Hope this helps.

【Comments】:

Yes, I think so. Does this mean cross_val_score does not allow combining 2 sets of training data into one? Is there a good alternative?

Please refer to the edited question for the original model. I have added it for more context.

@nilsinelabore Could you try list(zip(X_train_1, X_train_2))? I am not sure, but it might work. You may get another error, but it is something: the inconsistent lengths would probably be resolved, though Keras may not accept the result.

Thanks for the answer @Geeocode. I really appreciate the idea of doing the cross-validation myself, although it raised another problem (see the question update). The input data seems to have a similar issue; I will keep working from there. Please let me know if you have other suggestions. Thanks.

@nilsinelabore list(zip()) unfortunately does not work here.
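Why the list(zip()) idea from the comments fixes the length check yet still fails downstream can be seen in a small sketch (dummy shapes, not the real data): zipping does give one entry per sample, but each entry is a tuple of two differently shaped arrays, which Keras cannot treat as a single input tensor.

```python
import numpy as np

# Dummy stand-ins for the two inputs, 4 samples each
X_train_1 = np.zeros((4, 10, 1))
X_train_2 = np.zeros((4, 85))

pairs = list(zip(X_train_1, X_train_2))
print(len(pairs))  # 4 -> the length check against y would now pass...

# ...but each "sample" is a tuple of two differently shaped arrays,
# which cannot be stacked into one homogeneous input tensor
print(pairs[0][0].shape, pairs[0][1].shape)  # (10, 1) (85,)
```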
