Deep Learning with Keras - no learning rate with training

Posted: 2018-10-25 17:14:38

[Question]

I am creating my first model with Keras on stock data, using technical indicators as the model's inputs, and I notice that essentially no learning takes place: the loss does not change and neither does the accuracy. Since I am new to DL and Keras I may be overlooking something obvious, but I am asking for help here.

The code snippet and the training output are below:

# Model definition: a single sigmoid unit on 3 input features
from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(1, input_dim=3))
model.add(Activation(activation='sigmoid'))
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])

# Convert the pandas DataFrames to numpy arrays
# (as_matrix() is deprecated in newer pandas; .values is the equivalent)
data = trainingsetdata.as_matrix()
labels = trainingsetlabel.as_matrix()

# Evaluate once before training as a baseline
score = model.evaluate(data, labels, batch_size=32, verbose=1)
print(score)

model.fit(data, labels, batch_size=32, epochs=100, validation_split=0.05, verbose=2)

# Evaluate again after training
score = model.evaluate(data, labels, batch_size=32, verbose=1)
print(score)

[0.694263961315155, 0.4875]
Train on 380 samples, validate on 20 samples
Epoch 1/100
 - 0s - loss: 0.6939 - acc: 0.4605 - val_loss: 0.6900 - val_acc: 0.4000
Epoch 2/100
 - 0s - loss: 0.6934 - acc: 0.5079 - val_loss: 0.6882 - val_acc: 0.6000
Epoch 3/100
 - 0s - loss: 0.6932 - acc: 0.5211 - val_loss: 0.6867 - val_acc: 0.7000
Epoch 4/100
 - 0s - loss: 0.6929 - acc: 0.5289 - val_loss: 0.6858 - val_acc: 0.7000
Epoch 5/100
 - 0s - loss: 0.6929 - acc: 0.5237 - val_loss: 0.6850 - val_acc: 0.7000
Epoch 6/100
 - 0s - loss: 0.6928 - acc: 0.5263 - val_loss: 0.6841 - val_acc: 0.7000
Epoch 7/100
 - 0s - loss: 0.6927 - acc: 0.5184 - val_loss: 0.6836 - val_acc: 0.7000
Epoch 8/100
 - 0s - loss: 0.6926 - acc: 0.5184 - val_loss: 0.6828 - val_acc: 0.7000
Epoch 9/100
 - 0s - loss: 0.6925 - acc: 0.5105 - val_loss: 0.6823 - val_acc: 0.7000
Epoch 10/100
 - 0s - loss: 0.6925 - acc: 0.5079 - val_loss: 0.6816 - val_acc: 0.7000
Epoch 11/100
 - 0s - loss: 0.6923 - acc: 0.5132 - val_loss: 0.6808 - val_acc: 0.7000
Epoch 12/100
 - 0s - loss: 0.6923 - acc: 0.5105 - val_loss: 0.6800 - val_acc: 0.7000
Epoch 13/100
 - 0s - loss: 0.6923 - acc: 0.5105 - val_loss: 0.6793 - val_acc: 0.7000
Epoch 14/100
 - 0s - loss: 0.6922 - acc: 0.5105 - val_loss: 0.6789 - val_acc: 0.7000
Epoch 15/100
 - 0s - loss: 0.6922 - acc: 0.5105 - val_loss: 0.6783 - val_acc: 0.7000
Epoch 16/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6780 - val_acc: 0.7000
Epoch 17/100
 - 0s - loss: 0.6922 - acc: 0.5132 - val_loss: 0.6774 - val_acc: 0.7000
Epoch 18/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6771 - val_acc: 0.7000
Epoch 19/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6768 - val_acc: 0.7000
Epoch 20/100
 - 0s - loss: 0.6922 - acc: 0.5132 - val_loss: 0.6768 - val_acc: 0.7000
Epoch 21/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6766 - val_acc: 0.6500
Epoch 22/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6764 - val_acc: 0.6500
Epoch 23/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6762 - val_acc: 0.6500
Epoch 24/100
 - 0s - loss: 0.6922 - acc: 0.5132 - val_loss: 0.6762 - val_acc: 0.6500
Epoch 25/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6761 - val_acc: 0.6500
Epoch 26/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6759 - val_acc: 0.6500
Epoch 27/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6758 - val_acc: 0.6500
Epoch 28/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6757 - val_acc: 0.6500
Epoch 29/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6757 - val_acc: 0.6500
Epoch 30/100
 - 0s - loss: 0.6920 - acc: 0.5158 - val_loss: 0.6758 - val_acc: 0.6500
Epoch 31/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6756 - val_acc: 0.6500
Epoch 32/100
 - 0s - loss: 0.6922 - acc: 0.5132 - val_loss: 0.6757 - val_acc: 0.6500
Epoch 33/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6756 - val_acc: 0.6500
Epoch 34/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6755 - val_acc: 0.6500
Epoch 35/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6757 - val_acc: 0.6500
Epoch 36/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6755 - val_acc: 0.6500
Epoch 37/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6754 - val_acc: 0.6500
Epoch 38/100
 - 0s - loss: 0.6921 - acc: 0.5158 - val_loss: 0.6752 - val_acc: 0.6500
Epoch 39/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6750 - val_acc: 0.6500
Epoch 40/100
 - 0s - loss: 0.6920 - acc: 0.5158 - val_loss: 0.6749 - val_acc: 0.6500
Epoch 41/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6749 - val_acc: 0.6500
Epoch 42/100
 - 0s - loss: 0.6921 - acc: 0.5237 - val_loss: 0.6749 - val_acc: 0.6500
Epoch 43/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6749 - val_acc: 0.6500
Epoch 44/100
 - 0s - loss: 0.6920 - acc: 0.5158 - val_loss: 0.6748 - val_acc: 0.6500
Epoch 45/100
 - 0s - loss: 0.6920 - acc: 0.5158 - val_loss: 0.6749 - val_acc: 0.6500
Epoch 46/100
 - 0s - loss: 0.6921 - acc: 0.5263 - val_loss: 0.6746 - val_acc: 0.6500
Epoch 47/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6746 - val_acc: 0.6500
Epoch 48/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6745 - val_acc: 0.6500
Epoch 49/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6744 - val_acc: 0.6500
Epoch 50/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6744 - val_acc: 0.6500
Epoch 51/100
 - 0s - loss: 0.6921 - acc: 0.5132 - val_loss: 0.6747 - val_acc: 0.6500
Epoch 52/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6745 - val_acc: 0.6500
Epoch 53/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6746 - val_acc: 0.6500
Epoch 54/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6747 - val_acc: 0.6500
Epoch 55/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6747 - val_acc: 0.6500
Epoch 56/100
 - 0s - loss: 0.6921 - acc: 0.5158 - val_loss: 0.6747 - val_acc: 0.6500
Epoch 57/100
 - 0s - loss: 0.6921 - acc: 0.5237 - val_loss: 0.6745 - val_acc: 0.6500
Epoch 58/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 59/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 60/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 61/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 62/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 63/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 64/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 65/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 66/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 67/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 68/100
 - 0s - loss: 0.6921 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 69/100
 - 0s - loss: 0.6921 - acc: 0.5158 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 70/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 71/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 72/100
 - 0s - loss: 0.6921 - acc: 0.5184 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 73/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6739 - val_acc: 0.6500
Epoch 74/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 75/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 76/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 77/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 78/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 79/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 80/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 81/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 82/100
 - 0s - loss: 0.6921 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 83/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 84/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 85/100
 - 0s - loss: 0.6920 - acc: 0.5158 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 86/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 87/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 88/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 89/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6743 - val_acc: 0.6500
Epoch 90/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 91/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 92/100
 - 0s - loss: 0.6920 - acc: 0.5184 - val_loss: 0.6739 - val_acc: 0.6500
Epoch 93/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6740 - val_acc: 0.6500
Epoch 94/100
 - 0s - loss: 0.6920 - acc: 0.5211 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 95/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6740 - val_acc: 0.6500
Epoch 96/100
 - 0s - loss: 0.6921 - acc: 0.5263 - val_loss: 0.6738 - val_acc: 0.6500
Epoch 97/100
 - 0s - loss: 0.6920 - acc: 0.5263 - val_loss: 0.6739 - val_acc: 0.6500
Epoch 98/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6741 - val_acc: 0.6500
Epoch 99/100
 - 0s - loss: 0.6920 - acc: 0.5237 - val_loss: 0.6742 - val_acc: 0.6500
Epoch 100/100
 - 0s - loss: 0.6920 - acc: 0.5132 - val_loss: 0.6741 - val_acc: 0.6500
400/400 [==============================] - 0s 63us/step
[0.6910193943977356, 0.5275]
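Worth noting: a binary cross-entropy loss stuck near 0.693, as in the log above, is exactly what a model that always predicts 0.5 produces, since -ln(0.5) = ln 2 ≈ 0.6931. A quick sanity check:

```python
import math

# Binary cross-entropy for a constant prediction of p = 0.5 is
# -log(0.5) = ln(2) for either true label, which matches the loss
# the training log is stuck at (~0.693): the model has learned nothing
# beyond guessing.
p = 0.5
bce = -math.log(p)
print(round(bce, 4))  # 0.6931
```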

[Comments]

I am confused about what you are asking. Are you asking where you can set the learning rate of the RMSProp optimization algorithm? Or are you asking for advice on why your model is not learning to solve the problem at hand?

I am indeed asking why my model is not learning. Milind answered that below. I applied his suggestions and still don't see much learning, but that may be down to the data; I don't have the experience to estimate what improvement to expect in terms of loss and accuracy.

[Answer 1]

I am honestly not sure what the reason for your model's lack of learning is; there could be many. An explanation of the problem at hand would help, and may be the key to understanding why the model is not learning. I will make some initial guesses about the input data and about machine learning theory:

Input data

Is the input data properly normalized across all features? If the features are not normalized (or standardized), one feature may span a much larger range of values and overshadow the rest, so the model effectively pays attention to only that one feature.
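A minimal sketch of per-feature standardization (the indicator values below are illustrative, not from the question): each column is shifted to zero mean and scaled to unit variance so that no single feature dominates the gradient updates.

```python
import numpy as np

# Hypothetical feature matrix: three technical-indicator columns with
# very different scales (e.g. an oscillator ~0-100, a price ratio ~1,
# a volume in the millions).
X = np.array([
    [55.0, 1.02, 3.0e6],
    [48.0, 0.98, 1.5e6],
    [62.0, 1.05, 4.2e6],
    [40.0, 0.95, 2.1e6],
])

# Z-score standardization: subtract the per-column mean, divide by the
# per-column standard deviation.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
```

In practice the mean and std should be computed on the training set only and reused for validation/test data.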

Machine learning theory

I would try a model with at least 2 layers of non-linear activation functions: the reasoning is that if the model does not have at least 2 layers with non-linear activations, it is essentially a linear model (sigmoid is a non-linear activation).

Whenever you face a new problem and are trying to solve it with a NN for the first time, I recommend the Adam optimizer. Adam is something like a "fully automatic version" of all the optimizers. It may not be the optimal choice in some cases, but in my experience it is the best optimizer to try first, because it works out of the box.
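The two suggestions above can be sketched as follows. The hidden-layer sizes (16 and 8) are illustrative choices, not tuned for this problem:

```python
from keras.models import Sequential
from keras.layers import Dense

# Two hidden layers with non-linear (relu) activations, a sigmoid
# output for binary classification, and Adam as a robust default
# optimizer.
model = Sequential()
model.add(Dense(16, input_dim=3, activation='relu'))
model.add(Dense(8, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```

This drops in for the single-layer model in the question; `fit` and `evaluate` are called the same way.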

I hope this gives you some pointers that help the model learn something.

[Discussion]

Thanks for the feedback. I am trying to implement a stock prediction model in Python/Keras based on a research paper. I have indeed already normalized the inputs across all features. Good suggestions on the model and the optimizer; I will try them.

@hesemani Please report back when you have those new results.
