Keras: How to save model and continue training?

【Title】Keras: How to save model and continue training? 【Posted】2018-01-05 16:48:16 【Question】:

I have a model that has been trained for 40 epochs. I kept a checkpoint for every epoch, and I also saved the model with model.save(). The training code is:

n_units = 1000
model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')
# define the checkpoint
filepath="word2vec-epoch:02d-loss:.4f.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(x, y, epochs=40, batch_size=50, callbacks=callbacks_list)

However, when I load the model and try to train it again, it starts over as if it had never been trained. The loss does not continue from where the last training left off.

What confuses me is that when I load the model, redefine the model structure, and use load_weights, model.predict() works well. So I believe the model weights have been loaded:

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
filename = "word2vec-39-0.0027.hdf5"
model.load_weights(filename)
model.compile(loss='mean_squared_error', optimizer='adam')

However, when I continue training, the loss is as high as it was at the very beginning:

filepath="word2vec-epoch:02d-loss:.4f.hdf5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(x, y, epochs=40, batch_size=50, callbacks=callbacks_list)

I searched and found some examples of saving and loading models here and here. However, none of them work.


Update 1

I looked at this question, tried it, and it works:

model.save('partly_trained.h5')
del model
model = load_model('partly_trained.h5')

But when I close Python, reopen it, and run load_model again, it fails. The loss is as high as in the initial state.


Update 2

I tried Yu-Yang's example code and it works. However, when I use my own code again, it still fails.

This is the output of the original training. The second epoch should start with loss = 3.1***:

13700/13846 [============================>.] - ETA: 0s - loss: 3.0519
13750/13846 [============================>.] - ETA: 0s - loss: 3.0511
13800/13846 [============================>.] - ETA: 0s - loss: 3.0512Epoch 00000: loss improved from inf to 3.05101, saving model to LPT-00-3.0510.h5

13846/13846 [==============================] - 81s - loss: 3.0510    
Epoch 2/60

   50/13846 [..............................] - ETA: 80s - loss: 3.1754
  100/13846 [..............................] - ETA: 78s - loss: 3.1174
  150/13846 [..............................] - ETA: 78s - loss: 3.0745

I close Python, reopen it, load the model with model = load_model("LPT-00-3.0510.h5"), and then train with:

filepath="LPT-epoch:02d-loss:.4f.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
# fit the model
model.fit(x, y, epochs=60, batch_size=50, callbacks=callbacks_list)

The loss starts from 4.54:

Epoch 1/60
   50/13846 [..............................] - ETA: 162s - loss: 4.5451
   100/13846 [..............................] - ETA: 113s - loss: 4.3835

【Comments】:

Did you call model.compile(optimizer='adam') after load_model()? If so, don't. Recompiling the model with optimizer='adam' resets the optimizer's internal state (it actually creates a brand-new Adam optimizer instance).

Thanks for your answer. But no, I did not call model.compile again. After reopening Python, all I did was model = load_model('partly_trained.h5') and model.fit(x, y, epochs=20, batch_size=100).

I also tried redefining the model structure plus model.load_weights('checkpoint.hdf5') and model.compile(loss='categorical_crossentropy'), but it raises an error saying that an optimizer must be given.
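To make the first comment concrete, here is a minimal sketch of the recommended resume pattern; the checkpoint file name is hypothetical, and x and y stand in for the question's training data:

import numpy as np
from keras.models import load_model

# x, y mimic the question's data: (samples, timesteps, vec_size) inputs.
x = np.random.rand(500, 10, 100)
y = np.random.rand(500, 100)

# load_model() restores the architecture, the weights, AND the compiled
# optimizer state, so the model is ready to resume training as-is.
model = load_model('partly_trained.h5')

# Do NOT call model.compile(optimizer='adam') again here: recompiling
# creates a fresh Adam instance and resets its moment estimates.
model.fit(x, y, epochs=20, batch_size=100)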

【Solution 1】:

Since it is hard to figure out where the problem lies, I created a toy example from your code, and it seems to work fine.

import numpy as np
from numpy.testing import assert_allclose
from keras.models import Sequential, load_model
from keras.layers import LSTM, Dropout, Dense
from keras.callbacks import ModelCheckpoint

vec_size = 100
n_units = 10

x_train = np.random.rand(500, 10, vec_size)
y_train = np.random.rand(500, vec_size)

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')

# define the checkpoint
filepath = "model.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]

# fit the model
model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)

# load the model
new_model = load_model(filepath)
assert_allclose(model.predict(x_train),
                new_model.predict(x_train),
                1e-5)

# fit the model
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
new_model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)

The loss keeps decreasing after the model is loaded. (Restarting Python also causes no problem.)

Using TensorFlow backend.
Epoch 1/5
500/500 [==============================] - 2s - loss: 0.3216     Epoch 00000: loss improved from inf to 0.32163, saving model to model.h5
Epoch 2/5
500/500 [==============================] - 0s - loss: 0.2923     Epoch 00001: loss improved from 0.32163 to 0.29234, saving model to model.h5
Epoch 3/5
500/500 [==============================] - 0s - loss: 0.2542     Epoch 00002: loss improved from 0.29234 to 0.25415, saving model to model.h5
Epoch 4/5
500/500 [==============================] - 0s - loss: 0.2086     Epoch 00003: loss improved from 0.25415 to 0.20860, saving model to model.h5
Epoch 5/5
500/500 [==============================] - 0s - loss: 0.1725     Epoch 00004: loss improved from 0.20860 to 0.17249, saving model to model.h5

Epoch 1/5
500/500 [==============================] - 0s - loss: 0.1454     Epoch 00000: loss improved from inf to 0.14543, saving model to model.h5
Epoch 2/5
500/500 [==============================] - 0s - loss: 0.1289     Epoch 00001: loss improved from 0.14543 to 0.12892, saving model to model.h5
Epoch 3/5
500/500 [==============================] - 0s - loss: 0.1169     Epoch 00002: loss improved from 0.12892 to 0.11694, saving model to model.h5
Epoch 4/5
500/500 [==============================] - 0s - loss: 0.1097     Epoch 00003: loss improved from 0.11694 to 0.10971, saving model to model.h5
Epoch 5/5
500/500 [==============================] - 0s - loss: 0.1057     Epoch 00004: loss improved from 0.10971 to 0.10570, saving model to model.h5

By the way, redefining the model and then calling load_weights() definitely won't work, because save_weights() and load_weights() do not save/load the optimizer.
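A rough illustration of the difference, using a throwaway model (file names are placeholders):

from keras.models import Sequential, load_model
from keras.layers import Dense

model = Sequential([Dense(1, input_shape=(4,))])
model.compile(loss='mse', optimizer='adam')

# Weights-only: neither the architecture nor the optimizer state is
# stored, so a run resumed from this file starts Adam from scratch.
model.save_weights('weights_only.h5')

# Full save: architecture + weights + optimizer state in one file.
model.save('full_model.h5')

# Resuming from the full save needs no redefinition and no recompile.
model = load_model('full_model.h5')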

【Comments】:

I tried your toy code and it works. But going back to my own code, it still fails... I think I am doing exactly the same thing as your example. I don't understand why. See my update for details.

Just a random guess: are you using the same (x, y) before and after loading the model?

Yes. I really closed Python, reopened it, and reloaded the data.

@David So, what was the problem?

@David Tell us what the problem was, DAVIDDDD

【Solution 2】:

I compared my code with this example http://machinelearningmastery.com/text-generation-lstm-recurrent-neural-networks-python-keras/ by carefully commenting out code block by block and rerunning it. After a whole day of fiddling, I finally found the problem.

When building the char-to-int mapping, I used:

# title_str_reduced is a string
chars = list(set(title_str_reduced))
# make char to int index mapping
char2int = {}
for i in range(len(chars)):
    char2int[chars[i]] = i    

A set is an unordered data structure. In Python, when a set is converted to a list, the order is arbitrary. So every time I reopened Python, my char2int dictionary came out different. I fixed my code by adding sorted():

chars = sorted(list(set(title_str_reduced)))

This forces the conversion into a fixed order.
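A quick sketch of why sorted() matters here:

# Set iteration order for strings can differ between Python processes,
# because str hashing is randomized (PYTHONHASHSEED), so this list may
# come out in a different order on every restart:
chars = list(set("hello world"))

# sorted() pins the order, so the mapping is identical on every run:
chars = sorted(set("hello world"))
char2int = {c: i for i, c in enumerate(chars)}
print(char2int)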

【Comments】:

Thank you. I ran into exactly the same trouble. It always started over after a restart, incredibly even after .save and .load, though never within the same session. I couldn't figure it out myself, but after days of being lost I found your answer, and it saved me! :thanks:

【Solution 3】:

Here is the official Keras documentation on saving a model:

https://keras.io/getting-started/faq/#how-can-i-save-a-keras-model

In this post, the author gives two examples of saving a model to a file and loading it back:

JSON format.
YAML format.
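For context, a minimal sketch of the JSON variant (file names are placeholders); note that, as Solution 1 points out, this route does not carry the optimizer state across:

from keras.models import Sequential, model_from_json
from keras.layers import Dense

model = Sequential([Dense(1, input_shape=(4,))])
model.compile(loss='mse', optimizer='adam')

# Save the architecture as JSON and the weights separately.
with open('model.json', 'w') as f:
    f.write(model.to_json())
model.save_weights('weights.h5')

# Load: rebuild the architecture, restore the weights, then compile.
with open('model.json') as f:
    model = model_from_json(f.read())
model.load_weights('weights.h5')
model.compile(loss='mse', optimizer='adam')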

【Comments】:

Links to a solution are welcome, but please make sure your answer is useful without it: add context around the link so your fellow users will know what it is and why it is there, then quote the most relevant part of the page you are linking to in case the target page becomes unavailable. Answers that are little more than a link may be deleted.

Thanks for the advice, I'll keep that in mind.

【Solution 4】:

I think you should write

model.save('partly_trained.h5')

model = load_model('partly_trained.h5')

instead of

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))    
model.add(Dropout(0.2)) 
model.add(LSTM(n_units, return_sequences=True))  
model.add(Dropout(0.2)) 
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear')) 
model.compile(loss='mean_squared_error', optimizer='adam')

and then continue training, because model.save stores both the architecture and the weights, as you can read in the documentation.
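A side note (see the comment below): after loading, fit() restarts its epoch counter at 1. If the numbering matters, Keras's fit() accepts an initial_epoch argument; a minimal sketch, assuming the first run stopped after 40 epochs and x, y are the question's training data:

from keras.models import load_model

model = load_model('partly_trained.h5')  # checkpoint from the first run

# Resume with the epoch counter picking up where the first run stopped:
# this trains 20 more epochs, logged as "Epoch 41/60" ... "Epoch 60/60".
model.fit(x, y, epochs=60, initial_epoch=40, batch_size=50)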

【Comments】:

This worked for me. It is a bit deceptive in that it starts from epoch 1; however, its initial accuracy and loss are consistent with where training stopped (picking up from the last checkpoint). So if that matters, you may want to reduce the number of epochs to reflect it. I was unable to find a way to specify "start from epoch X", but I think it is largely cosmetic anyway.

【Solution 5】:

Suppose you have code like this:

model = some_model_you_made(input_img) # you compiled your model in this 
model.summary()

model_checkpoint = ModelCheckpoint('yours.h5', monitor='val_loss', verbose=1, save_best_only=True)

model_json = model.to_json()
with open("yours.json", "w") as json_file:
    json_file.write(model_json)

model.fit_generator(#stuff...) # or model.fit(#stuff...)

Now change your code into this:

model = some_model_you_made(input_img) #same model here
model.summary()

model_checkpoint = ModelCheckpoint('yours.h5', monitor='val_loss', verbose=1, save_best_only=True) #same checkpoint

model_json = model.to_json()
with open("yours.json", "w") as json_file:
    json_file.write(model_json)

with open('yours.json', 'r') as f:
    old_model = model_from_json(f.read()) # open the model you just saved (same as your last train) with a different name

old_model.load_weights('yours.h5') # the model checkpoint you trained before
old_model.compile(#stuff...) # need to compile again (exactly like the last compile)

# now start training with the checkpoint...
old_model.fit_generator(#same stuff like the last train) # or model.fit(#stuff...)

【Comments】:

【Solution 6】:

The answers above use TensorFlow 1.x. Here is an updated version for TensorFlow 2.x.

import numpy as np
from numpy.testing import assert_allclose
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.layers import LSTM, Dropout, Dense
from tensorflow.keras.callbacks import ModelCheckpoint

vec_size = 100
n_units = 10

x_train = np.random.rand(500, 10, vec_size)
y_train = np.random.rand(500, vec_size)

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')

# define the checkpoint
filepath = "model.h5"
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]

# fit the model
model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)

# load the model
new_model = load_model("model.h5")
assert_allclose(model.predict(x_train),
                new_model.predict(x_train),
                1e-5)

# fit the model
checkpoint = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
callbacks_list = [checkpoint]
new_model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)

【Comments】:

Well, this code gives an error on my model, since model.predict(x_train) is not equal to new_model.predict(x_train) (and the other way around as well). I actually have a different model setup using simple Conv2D, Flatten, Dense, and MaxPooling2D layers, so if that is the problem, what do I need to do?

【Solution 7】:

The accepted answer is not correct; the real problem is more subtle.

When you create a ModelCheckpoint(), check best:

cp1 = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
print(cp1.best)

You will see that it is set to np.inf, which unfortunately is not your last best value from when you stopped training. So when you retrain and recreate the ModelCheckpoint(), the first epoch will always "improve" on infinity: calling fit seems to work as long as the new loss is lower than the previously known value, but on more complex problems you will end up saving a bad model over your best one.

You can fix this by overwriting the cp.best attribute, as follows:

import numpy as np
from numpy.testing import assert_allclose
from keras.models import Sequential, load_model
from keras.layers import LSTM, Dropout, Dense
from keras.callbacks import ModelCheckpoint

vec_size = 100
n_units = 10

x_train = np.random.rand(500, 10, vec_size)
y_train = np.random.rand(500, vec_size)

model = Sequential()
model.add(LSTM(n_units, input_shape=(None, vec_size), return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units, return_sequences=True))
model.add(Dropout(0.2))
model.add(LSTM(n_units))
model.add(Dropout(0.2))
model.add(Dense(vec_size, activation='linear'))
model.compile(loss='mean_squared_error', optimizer='adam')

# define the checkpoint
filepath = "model.h5"
cp1 = ModelCheckpoint(filepath=filepath, monitor='loss', save_best_only=True, verbose=1, mode='min')
callbacks_list = [cp1]

# fit the model
model.fit(x_train, y_train, epochs=5, batch_size=50, shuffle=True, validation_split=0.1, callbacks=callbacks_list)

# load the model
new_model = load_model(filepath)
#assert_allclose(model.predict(x_train),new_model.predict(x_train), 1e-5)
score = model.evaluate(x_train, y_train, batch_size=50)
cp1 = ModelCheckpoint(filepath, monitor='loss', verbose=1, save_best_only=True, mode='min')
cp1.best = score # <== ****THIS IS THE KEY **** See source for  ModelCheckpoint

# fit the model
callbacks_list = [cp1]
new_model.fit(x_train, y_train, epochs=5, batch_size=50, callbacks=callbacks_list)

【Comments】:
