Keras evaluate() and predict() results are wildly different
I am working on a binary classification model with Keras. The data setup is below:
print(train_x.shape) --(79520,)
print(test_x.shape) --(26507,)
print(train_y.shape) --(79520,)
print(test_y.shape) --(26507,)
I am using an LSTM; the output activation is 'sigmoid' and 'binary_crossentropy' is my loss function.
from keras import layers, models
from keras.layers import CuDNNLSTM

input_layer = layers.Input((100,))
embedding_layer = layers.Embedding(20001, 100)(input_layer)
lstm_layer = layers.Bidirectional(CuDNNLSTM(64,return_sequences=True))(embedding_layer)
pooling_layer = layers.GlobalMaxPool1D()(lstm_layer)
op_layer = layers.Dense(50, activation='relu')(pooling_layer)
op_layer = layers.Dropout(0.5)(op_layer)
op_layer = layers.Dense(1, activation = 'sigmoid')(op_layer)
model = models.Model(inputs=input_layer, outputs=op_layer)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) (None, 100) 0
_________________________________________________________________
embedding_1 (Embedding) (None, 100, 100) 2000100
_________________________________________________________________
bidirectional_1 (Bidirection (None, 100, 128) 84992
_________________________________________________________________
global_max_pooling1d_1 (Glob (None, 128) 0
_________________________________________________________________
dense_1 (Dense) (None, 50) 6450
_________________________________________________________________
dropout_1 (Dropout) (None, 50) 0
_________________________________________________________________
dense_2 (Dense) (None, 1) 51
=================================================================
Total params: 2,091,593
Trainable params: 2,091,593
Non-trainable params: 0
_________________________________________________________________
At the end of 10 epochs the training accuracy is about 0.95 and the validation accuracy is around 0.72.
model.fit(train_x, train_y, epochs=10, batch_size=10, validation_split = 0.1)
Train on 71568 samples, validate on 7952 samples
Epoch 1/10
71568/71568 [==============================] - 114s 2ms/step - loss: 0.6014 - acc: 0.6603 - val_loss: 0.5556 - val_acc: 0.7006
Epoch 2/10
71568/71568 [==============================] - 107s 1ms/step - loss: 0.4921 - acc: 0.7573 - val_loss: 0.5449 - val_acc: 0.7194
Epoch 3/10
71568/71568 [==============================] - 107s 1ms/step - loss: 0.3918 - acc: 0.8179 - val_loss: 0.5924 - val_acc: 0.7211
Epoch 4/10
71568/71568 [==============================] - 107s 2ms/step - loss: 0.3026 - acc: 0.8667 - val_loss: 0.6642 - val_acc: 0.7248
Epoch 5/10
71568/71568 [==============================] - 107s 1ms/step - loss: 0.2363 - acc: 0.8963 - val_loss: 0.7322 - val_acc: 0.7271
Epoch 6/10
71568/71568 [==============================] - 107s 2ms/step - loss: 0.1939 - acc: 0.9155 - val_loss: 0.8349 - val_acc: 0.7150
Epoch 7/10
71568/71568 [==============================] - 107s 2ms/step - loss: 0.1621 - acc: 0.9292 - val_loss: 1.0337 - val_acc: 0.7226
Epoch 8/10
71568/71568 [==============================] - 107s 1ms/step - loss: 0.1417 - acc: 0.9375 - val_loss: 0.9998 - val_acc: 0.7221
Epoch 9/10
71568/71568 [==============================] - 107s 1ms/step - loss: 0.1273 - acc: 0.9433 - val_loss: 1.1732 - val_acc: 0.7197
Epoch 10/10
71568/71568 [==============================] - 107s 1ms/step - loss: 0.1138 - acc: 0.9481 - val_loss: 1.1462 - val_acc: 0.7222
scores = model.evaluate(test_x,test_y, verbose=1)
print("%s: %.2f%%" % (model.metrics_names[1], scores[1]*100))
26507/26507 [==============================] - 5s 183us/step
acc: 72.45%
Up to this point everything looks fine, but when I run predict() on the test data things go south:
from sklearn.metrics import accuracy_score

pred=model.predict(test_x)
pred=pred.argmax(axis=-1)
print(accuracy_score(pred,test_y)*100)
43.48285358584525
from sklearn.metrics import confusion_matrix
confusion_matrix(test_y, pred)
array([[11526, 0],
[14981, 0]])
I cannot understand why the evaluate() and predict() results are so far apart. Can you point out what is wrong? I am running this on a GPU EC2 instance with the following software versions:
Keras 2.2.4, TensorFlow 1.12.0
Let me know if you need any other details about the model. Thanks.
The fact that your acc and val_acc are so far apart indicates that your model is badly overfitting. In general you want a model whose acc and val_acc stay close to each other. Worse still, the gap between loss and val_loss is dramatic: val_loss is unstable and keeps climbing as training goes on. These are exactly the kinds of signs to watch for while training a model. It is well worth spending time learning about overfitting and underfitting and how to deal with them.
Also, accuracy is often a weak metric for binary classification tasks, so it may not be a good basis for judging the model in the first place. Something like the F1-score is usually better, unless your true/false labels are split close to 50/50. You can find recall, precision and F1 metrics for Keras here.
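To compute these metrics on the test set you first need hard labels. Note that with a single sigmoid output unit, predict() returns probabilities with shape (n, 1), so taking argmax over the last axis, as in the question, always yields class 0; thresholding at 0.5 is the usual approach. A minimal sketch using scikit-learn, assuming the same model and test data as above:

from sklearn.metrics import classification_report

# predict() yields P(class=1) with shape (n_samples, 1) for a sigmoid output;
# threshold at 0.5 to get 0/1 labels instead of calling argmax.
probs = model.predict(test_x)
pred_labels = (probs > 0.5).astype('int32').ravel()

# Per-class precision, recall and F1 are more informative than plain accuracy
# when the classes are imbalanced.
print(classification_report(test_y, pred_labels))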