ValueError: Unknown loss function:focal_loss_fixed when loading model with my custom loss function

Posted: 2020-01-18 18:50:14

Question:

I designed my own loss function. However, when I try to revert to the best model encountered during training with

model = load_model("lc_model.h5")

I get the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-105-9d09ef163b0a> in <module>
     23 
     24 # revert to the best model encountered during training
---> 25 model = load_model("lc_model.h5")

C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\saving.py in load_model(filepath, custom_objects, compile)
    417     f = h5dict(filepath, 'r')
    418     try:
--> 419         model = _deserialize_model(f, custom_objects, compile)
    420     finally:
    421         if opened_new_file:

C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\saving.py in _deserialize_model(f, custom_objects, compile)
    310                       metrics=metrics,
    311                       loss_weights=loss_weights,
--> 312                       sample_weight_mode=sample_weight_mode)
    313 
    314         # Set optimizer weights.

C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py in compile(self, optimizer, loss, metrics, loss_weights, sample_weight_mode, weighted_metrics, target_tensors, **kwargs)
    137             loss_functions = [losses.get(l) for l in loss]
    138         else:
--> 139             loss_function = losses.get(loss)
    140             loss_functions = [loss_function for _ in range(len(self.outputs))]
    141         self.loss_functions = loss_functions

C:\ProgramData\Anaconda3\lib\site-packages\keras\losses.py in get(identifier)
    131     if isinstance(identifier, six.string_types):
    132         identifier = str(identifier)
--> 133         return deserialize(identifier)
    134     if isinstance(identifier, dict):
    135         return deserialize(identifier)

C:\ProgramData\Anaconda3\lib\site-packages\keras\losses.py in deserialize(name, custom_objects)
    112                                     module_objects=globals(),
    113                                     custom_objects=custom_objects,
--> 114                                     printable_module_name='loss function')
    115 
    116 

C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\generic_utils.py in deserialize_keras_object(identifier, module_objects, custom_objects, printable_module_name)
    163             if fn is None:
    164                 raise ValueError('Unknown ' + printable_module_name +
--> 165                                  ':' + function_name)
    166         return fn
    167     else:

ValueError: Unknown loss function:focal_loss_fixed

Here is the neural network:

from keras.callbacks import ModelCheckpoint
from keras.models import load_model

model = create_model(x_train.shape[1], y_train.shape[1])

epochs =  35
batch_sz = 64

print("Beginning model training with batch size  and  epochs".format(batch_sz, epochs))

checkpoint = ModelCheckpoint("lc_model.h5", monitor='val_acc', verbose=0, save_best_only=True, mode='auto', period=1)

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.constraints import maxnorm

def create_model(input_dim, output_dim):
    print(output_dim)
    # create model
    model = Sequential()
    # input layer
    model.add(Dense(100, input_dim=input_dim, activation='relu', kernel_constraint=maxnorm(3)))
    model.add(Dropout(0.2))

    # hidden layer
    model.add(Dense(60, activation='relu', kernel_constraint=maxnorm(3)))
    model.add(Dropout(0.2))

    # output layer
    model.add(Dense(output_dim, activation='softmax'))

    # Compile model
    # model.compile(loss='categorical_crossentropy', loss_weights=None, optimizer='adam', metrics=['accuracy'])
    model.compile(loss=focal_loss(alpha=1), loss_weights=None, optimizer='adam', metrics=['accuracy'])

    return model

# train the model
history = model.fit(x_train.as_matrix(),
                y_train.as_matrix(),
                validation_split=0.2,
                epochs=epochs,  
                batch_size=batch_sz, # Can I tweak the batch here to get evenly distributed data?
                verbose=2,
                class_weight = weights, # class_weight tells the model to "pay more attention" to samples from an under-represented fraud class.
                callbacks=[checkpoint])

# revert to the best model encountered during training
model = load_model("lc_model.h5")

And here is my loss function:

import tensorflow as tf

def focal_loss(gamma=2., alpha=4.):

    gamma = float(gamma)
    alpha = float(alpha)

    def focal_loss_fixed(y_true, y_pred):
        """Focal loss for multi-classification
        FL(p_t) = -alpha * (1 - p_t)^gamma * ln(p_t)
        Notice: y_pred is probability after softmax
        gradient is d(Fl)/d(p_t) not d(Fl)/d(x) as described in paper
        d(Fl)/d(p_t) * [p_t(1-p_t)] = d(Fl)/d(x)
        Focal Loss for Dense Object Detection
        https://arxiv.org/abs/1708.02002

        Arguments:
            y_true tensor -- ground truth labels, shape of [batch_size, num_cls]
            y_pred tensor -- model's output, shape of [batch_size, num_cls]

        Keyword Arguments:
            gamma float -- (default: 2.0)
            alpha float -- (default: 4.0)

        Returns:
            [tensor] -- loss.
        """
        epsilon = 1.e-9
        y_true = tf.convert_to_tensor(y_true, tf.float32)
        y_pred = tf.convert_to_tensor(y_pred, tf.float32)

        model_out = tf.add(y_pred, epsilon)
        ce = tf.multiply(y_true, -tf.log(model_out))
        weight = tf.multiply(y_true, tf.pow(tf.subtract(1., model_out), gamma))
        fl = tf.multiply(alpha, tf.multiply(weight, ce))
        reduced_fl = tf.reduce_max(fl, axis=1)
        return tf.reduce_mean(reduced_fl)
    return focal_loss_fixed

# model.compile(loss=focal_loss(alpha=1), optimizer='nadam', metrics=['accuracy'])
# model.fit(X_train, y_train, epochs=3, batch_size=1000)
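
For reference, here is a minimal sanity check of the loss in isolation (a sketch assuming TF 1.x, to match the tf.log call above); the label and prediction arrays are made up:

import numpy as np
import tensorflow as tf

loss_fn = focal_loss(gamma=2., alpha=4.)

# Made-up one-hot labels and softmax-style predictions, shape [batch_size, num_cls]
y_true = np.array([[0., 1.], [1., 0.]], dtype=np.float32)
y_pred = np.array([[0.1, 0.9], [0.8, 0.2]], dtype=np.float32)

with tf.Session() as sess:
    print(sess.run(loss_fn(y_true, y_pred)))  # a single scalar loss for the batch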

Comments:

Possible duplicate of Loading model with custom loss + keras

@TheGuywithTheHat Unfortunately, the answer is not adding custom_objects as in that question's answers, which produces yet another error, but rather not recompiling the model when loading it.

Answer 1:

You need to pass the custom_objects entry for focal_loss_fixed when loading the model, as shown below:

model = load_model("lc_model.h5", custom_objects='focal_loss_fixed': focal_loss())

However, if you only want to run inference with the model and not optimize or train it further, you can simply skip the loss function like this:

model = load_model("lc_model.h5", compile=False)

Comments:

Simply adding compile=False saved me a lot of time today. I thought I would have to train the model again... thanks :)

Worked for me, but what is the logic behind this? Why doesn't model = load_model('/resnet152.h5') work on its own?

Answer 2:

@Prasad's answer is great, but I want to add a bit of explanation and a small correction:

When referring to your custom loss function in the custom_objects dictionary, you should not call the loss function, since doing so can lead to missing-argument errors.

# Instead of this
model = load_model("lc_model.h5", custom_objects={'focal_loss_fixed': focal_loss()})

# try this, without calling your loss function
model = load_model("lc_model.h5", custom_objects={'focal_loss_fixed': focal_loss})

Also, I would add that in custom_objects you must use the name of your custom loss function as the key and the function object as its value. I know this is very basic, but I mention it in case it helps someone.
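
To make the key concrete: with a closure-style loss like the one in this question, the name Keras records at save time is the inner function's __name__, not the factory's. A small sketch, assuming the focal_loss factory above:

inner = focal_loss(alpha=1)
print(focal_loss.__name__)  # 'focal_loss'       -- the factory name
print(inner.__name__)       # 'focal_loss_fixed' -- the key Keras looks up in custom_objects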

