Why do training accuracy and validation accuracy oscillate and increase very slowly?

Posted: 2022-01-15 19:13:44

Question:

I am working on a text classification problem, and both training and validation accuracy increase extremely slowly while also oscillating. Please help me figure out what is wrong.

Here is the information about my data split and model.

x_train: 47,723 samples; x_test: 3,372; x_valid: 1,631; batch size = 64; epochs = 50

Model:

import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# Imports needed by the layer calls below
from tensorflow.keras.layers import (Reshape, Conv2D, MaxPool2D, Concatenate,
                                     Dropout, Dense)

maxlen = 2500
max_features = 40000
embedding_dim = 100
num_filters = 128
filter_sizes = [1, 3, 5]
inp = keras.layers.Input(shape=(maxlen,), dtype="int32")
# embedding_matrix (pre-trained word vectors of shape
# (max_features, embedding_dim)) is assumed to be defined earlier
embedding_layer = keras.layers.Embedding(max_features,
                                         embedding_dim,
                                         weights=[embedding_matrix],
                                         input_length=maxlen,
                                         trainable=False)(inp)


print(embedding_layer.shape)
reshape = Reshape((maxlen,embedding_dim,1))(embedding_layer)
print(reshape.shape)
conv_0 = Conv2D(num_filters, kernel_size=(filter_sizes[0], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
conv_1 = Conv2D(num_filters, kernel_size=(filter_sizes[1], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)
conv_2 = Conv2D(num_filters, kernel_size=(filter_sizes[2], embedding_dim), padding='valid', kernel_initializer='normal', activation='relu')(reshape)

maxpool_0 = MaxPool2D(pool_size=(maxlen - filter_sizes[0] + 1, 1), strides=(1,1), padding='valid')(conv_0)
maxpool_1 = MaxPool2D(pool_size=(maxlen - filter_sizes[1] + 1, 1), strides=(1,1), padding='valid')(conv_1)
maxpool_2 = MaxPool2D(pool_size=(maxlen - filter_sizes[2] + 1, 1), strides=(1,1), padding='valid')(conv_2)


concatenated_tensor = Concatenate(axis=1)([maxpool_0, maxpool_1, maxpool_2])
newdim = tuple([x for x in concatenated_tensor.shape.as_list() if x != 1 and x is not None])
reshape_layer = Reshape(newdim) (concatenated_tensor)
# `Attention` here must be a custom single-input layer; the built-in
# keras.layers.Attention expects a [query, value] list of inputs.
attention = Attention()(reshape_layer)
dropout = Dropout(0.2)(attention)
output = Dense(units=8922, activation='sigmoid')(dropout)

# this creates a model that includes
model = keras.models.Model(inputs=inp, outputs=output)

model.summary()
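As an aside, the Reshape/Conv2D/MaxPool2D pattern above can be expressed more directly with Conv1D and GlobalMaxPooling1D, which avoids the manual reshape and pool-size arithmetic. The following is only a sketch of that equivalent structure, with deliberately small, hypothetical dimensions (the real model uses maxlen=2500, max_features=40000, embedding_dim=100, and 8922 output units):

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical small dimensions for illustration only
maxlen, max_features, embedding_dim, num_filters = 100, 1000, 16, 32
filter_sizes = [1, 3, 5]

inp = keras.Input(shape=(maxlen,), dtype="int32")
emb = layers.Embedding(max_features, embedding_dim)(inp)  # (batch, maxlen, embedding_dim)

# One Conv1D branch per filter size; GlobalMaxPooling1D replaces the
# MaxPool2D-over-the-full-sequence trick from the 2D version.
branches = []
for fs in filter_sizes:
    x = layers.Conv1D(num_filters, fs, padding="valid", activation="relu")(emb)
    branches.append(layers.GlobalMaxPooling1D()(x))

merged = layers.Concatenate()(branches)              # (batch, 3 * num_filters)
merged = layers.Dropout(0.2)(merged)
out = layers.Dense(8, activation="sigmoid")(merged)  # 8 labels here, not 8922

model = keras.Model(inp, out)
print(model.output_shape)
```

This does not change the model's behavior in principle; it only makes the text-CNN structure easier to read and modify.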

Model training:

Train...
Epoch 1/50
746/746 [==============================] - 302s 392ms/step - loss: 0.0115 - f1_m: 0.0470 - val_loss: 0.0093 - val_f1_m: 0.0754
Epoch 2/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0080 - f1_m: 0.1368 - val_loss: 0.0086 - val_f1_m: 0.1276
Epoch 3/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0075 - f1_m: 0.1763 - val_loss: 0.0082 - val_f1_m: 0.1459
Epoch 4/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0072 - f1_m: 0.2030 - val_loss: 0.0079 - val_f1_m: 0.1566
Epoch 5/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0070 - f1_m: 0.2232 - val_loss: 0.0077 - val_f1_m: 0.1654
Epoch 6/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0069 - f1_m: 0.2370 - val_loss: 0.0076 - val_f1_m: 0.1794
Epoch 7/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0067 - f1_m: 0.2485 - val_loss: 0.0074 - val_f1_m: 0.2144
Epoch 8/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0066 - f1_m: 0.2603 - val_loss: 0.0073 - val_f1_m: 0.2255
Epoch 9/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0065 - f1_m: 0.2689 - val_loss: 0.0073 - val_f1_m: 0.2131
Epoch 10/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0064 - f1_m: 0.2759 - val_loss: 0.0072 - val_f1_m: 0.2311
Epoch 11/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0064 - f1_m: 0.2841 - val_loss: 0.0072 - val_f1_m: 0.2394
Epoch 12/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0063 - f1_m: 0.2892 - val_loss: 0.0072 - val_f1_m: 0.2298
Epoch 13/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0062 - f1_m: 0.2938 - val_loss: 0.0071 - val_f1_m: 0.2480
Epoch 14/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0062 - f1_m: 0.2991 - val_loss: 0.0071 - val_f1_m: 0.2436
Epoch 15/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0061 - f1_m: 0.3018 - val_loss: 0.0070 - val_f1_m: 0.2595
Epoch 16/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0061 - f1_m: 0.3062 - val_loss: 0.0070 - val_f1_m: 0.2679
Epoch 17/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0060 - f1_m: 0.3095 - val_loss: 0.0070 - val_f1_m: 0.2566
Epoch 18/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0060 - f1_m: 0.3138 - val_loss: 0.0070 - val_f1_m: 0.2655
Epoch 19/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0060 - f1_m: 0.3166 - val_loss: 0.0070 - val_f1_m: 0.2498
Epoch 20/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0059 - f1_m: 0.3192 - val_loss: 0.0070 - val_f1_m: 0.2789
Epoch 21/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0059 - f1_m: 0.3210 - val_loss: 0.0071 - val_f1_m: 0.2638
Epoch 22/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0059 - f1_m: 0.3236 - val_loss: 0.0071 - val_f1_m: 0.2537
Epoch 23/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0058 - f1_m: 0.3255 - val_loss: 0.0069 - val_f1_m: 0.2832
Epoch 24/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0058 - f1_m: 0.3268 - val_loss: 0.0070 - val_f1_m: 0.2770
Epoch 25/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0058 - f1_m: 0.3291 - val_loss: 0.0070 - val_f1_m: 0.2723
Epoch 26/50
746/746 [==============================] - 291s 391ms/step - loss: 0.0058 - f1_m: 0.3300 - val_loss: 0.0069 - val_f1_m: 0.2864
Epoch 27/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0058 - f1_m: 0.3314 - val_loss: 0.0069 - val_f1_m: 0.2779
Epoch 28/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0058 - f1_m: 0.3324 - val_loss: 0.0070 - val_f1_m: 0.2778
Epoch 29/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0057 - f1_m: 0.3344 - val_loss: 0.0070 - val_f1_m: 0.2824
Epoch 30/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0057 - f1_m: 0.3353 - val_loss: 0.0070 - val_f1_m: 0.2656
Epoch 31/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0057 - f1_m: 0.3367 - val_loss: 0.0070 - val_f1_m: 0.2889
Epoch 32/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0057 - f1_m: 0.3376 - val_loss: 0.0070 - val_f1_m: 0.2822
Epoch 33/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0057 - f1_m: 0.3386 - val_loss: 0.0070 - val_f1_m: 0.2904
Epoch 34/50
746/746 [==============================] - 292s 392ms/step - loss: 0.0057 - f1_m: 0.3397 - val_loss: 0.0070 - val_f1_m: 0.2849
Epoch 35/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0057 - f1_m: 0.3407 - val_loss: 0.0070 - val_f1_m: 0.2767
Epoch 36/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0057 - f1_m: 0.3415 - val_loss: 0.0069 - val_f1_m: 0.2957
Epoch 37/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0057 - f1_m: 0.3426 - val_loss: 0.0070 - val_f1_m: 0.2946
Epoch 38/50
746/746 [==============================] - 291s 390ms/step - loss: 0.0056 - f1_m: 0.3441 - val_loss: 0.0070 - val_f1_m: 0.2793
Epoch 39/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0056 - f1_m: 0.3443 - val_loss: 0.0070 - val_f1_m: 0.2928
Epoch 40/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0056 - f1_m: 0.3449 - val_loss: 0.0070 - val_f1_m: 0.2852
Epoch 41/50
746/746 [==============================] - 292s 391ms/step - loss: 0.0056 - f1_m: 0.3447 - val_loss: 0.0070 - val_f1_m: 0.2883
Epoch 42/50
746/746 [==============================] - 292s 392ms/step - loss: 0.0056 - f1_m: 0.3468 - val_loss: 0.0069 - val_f1_m: 0.3026
Epoch 43/50
333/746 [============>.................] - ETA: 2:40 - loss: 0.0056 - f1_m: 0.3519
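In the log above, val_loss is essentially flat from around epoch 20 onward while training continues for 50 epochs. One common way to react to such a plateau is with Keras callbacks; the factors and patience values below are illustrative assumptions, not tuned settings:

```python
from tensorflow import keras

# Illustrative callbacks: halve the learning rate when val_loss stalls,
# and stop early once it has stopped improving for a while.
callbacks = [
    keras.callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.5,
                                      patience=3, min_lr=1e-5),
    keras.callbacks.EarlyStopping(monitor="val_loss", patience=8,
                                  restore_best_weights=True),
]

# Usage (commented out; x_train / y_train etc. come from the question's setup):
# model.fit(x_train, y_train, validation_data=(x_valid, y_valid),
#           batch_size=64, epochs=50, callbacks=callbacks)
print([type(cb).__name__ for cb in callbacks])
```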

Comments:

Answer 1:

Have you tried increasing the learning rate? That could be one reason it is learning so slowly.
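For reference, the learning rate can be set explicitly when constructing the optimizer. The optimizer choice and value below are assumptions for illustration only (the question does not state which optimizer is used):

```python
from tensorflow import keras

# Explicit learning rate on the optimizer; 3e-4 is an arbitrary example value.
opt = keras.optimizers.Adam(learning_rate=3e-4)

# The model would then be compiled with it, e.g.:
# model.compile(optimizer=opt, loss="binary_crossentropy", metrics=[f1_m])
print(float(opt.learning_rate.numpy()))
```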

Comments:

Yes, initially it was 0.001, then I changed it to 0.01 and 0.1 and the same problem occurred.

Answer 2:

Is there any reason to use convolutional layers in your architecture? Your data is text, and convolutional layers are typically used for image data.

I think you should try using only dense layers and see whether the accuracy changes at all.
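A minimal dense-only baseline along these lines might look as follows. All dimensions are illustrative stand-ins, not the question's real values, and the pooling layer is one assumed way to collapse the sequence dimension before the dense layers:

```python
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical small dimensions for illustration only
maxlen, max_features, embedding_dim = 100, 1000, 16

model = keras.Sequential([
    layers.Embedding(max_features, embedding_dim),
    layers.GlobalAveragePooling1D(),      # collapse (maxlen, dim) -> (dim,)
    layers.Dense(64, activation="relu"),
    layers.Dense(8, activation="sigmoid"),  # 8 labels here, not 8922
])
model.build(input_shape=(None, maxlen))
print(model.output_shape)
```

Comparing this baseline's learning curve against the CNN's would show whether the convolutional blocks are actually the bottleneck.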

Comments:

Thanks for your answer, but convolutional networks also work well for text classification. The problem I am facing is that training and validation accuracy improve very slowly. If the conv nets were the problem, the model would not learn at all, but currently it is simply learning very slowly.
