Keras custom Lambda layer: how to normalize / scale the output
Posted: 2021-02-02 17:12:38

Question: I am struggling to scale the output of a Lambda layer. The code is below. My X_train is 100*15*24 and Y_train is 100*1 (the network consists of an LSTM layer followed by Dense layers).
from tensorflow.keras.layers import Input, LSTM, Dense, Lambda
from tensorflow.keras.models import Model
from sklearn.preprocessing import MinMaxScaler

input_shape = (timesteps, num_feat)
data_input = Input(shape=input_shape, name="input_layer")
lstm1 = LSTM(10, name="lstm_layer")(data_input)
dense1 = Dense(4, activation="relu", name="dense1")(lstm1)
dense2 = Dense(1, activation="custom_activation_1", name="dense2")(dense1)
dense3 = Dense(1, activation="custom_activation_2", name="dense3")(dense1)
# dense2 and dense3 use custom activation functions whose range is the whole real line (so I need to normalize the output)

## custom lambda layer / loss function ##
def custom_layer(new_input):
    add_input = new_input[0] + new_input[1]
    # the three lines below are where the problem occurs that keeps the program from working
    ###############################################
    scaler = MinMaxScaler()
    scaler.fit(add_input)
    normalized = scaler.transform(add_input)
    ###############################################
    return normalized

lambda_layer = Lambda(custom_layer, name="lambda_layer")([dense2, dense3])
model = Model(inputs=data_input, outputs=lambda_layer)
model.compile(loss="mse", optimizer="adam", metrics=["accuracy"])
model.fit(X_train, Y_train, epochs=2, batch_size=216)
How do I normalize the output of lambda_layer correctly? Any ideas or suggestions are appreciated!
Answer 1: I don't think scikit-learn transformers will work inside a Lambda layer. If you are only interested in the min-max normalized output of the incoming data, you can do the following:
from tensorflow.keras.layers import Input, LSTM, Dense, Lambda
from tensorflow.keras.models import Model
import tensorflow as tf

timesteps = 3
num_feat = 12
input_shape = (timesteps, num_feat)
data_input = Input(shape=input_shape, name="input_layer")
lstm1 = LSTM(10, name="lstm_layer")(data_input)
dense1 = Dense(4, activation="relu", name="dense1")(lstm1)
dense2 = Dense(1, activation="custom_activation_1", name="dense2")(dense1)
dense3 = Dense(1, activation="custom_activation_2", name="dense3")(dense1)
# dense2 and dense3 use custom activation functions whose range is the whole real line (so the output needs normalizing)

## custom lambda layer / loss function ##
def custom_layer(new_input):
    add_input = new_input[0] + new_input[1]
    # min-max normalize over the batch axis (axis=0), like scikit-learn's MinMaxScaler
    normalized = (add_input - tf.reduce_min(add_input, axis=0, keepdims=True)) / \
                 (tf.reduce_max(add_input, axis=0, keepdims=True) - tf.reduce_min(add_input, axis=0, keepdims=True))
    return normalized

lambda_layer = Lambda(custom_layer, name="lambda_layer")([dense2, dense3])
model = Model(inputs=data_input, outputs=lambda_layer)
model.compile(loss="mse", optimizer="adam", metrics=["accuracy"])
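For what it's worth, here is a minimal sanity check of that formula outside the model (the random tensors below are made-up stand-ins for the outputs of dense2 and dense3, not part of the original answer):

import tensorflow as tf

a = tf.random.normal((8, 1))   # stand-in for dense2's output
b = tf.random.normal((8, 1))   # stand-in for dense3's output
out = custom_layer([a, b])
# the min and max of the normalized result should be ~0.0 and ~1.0
print(float(tf.reduce_min(out)), float(tf.reduce_max(out)))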
Discussion:
Thank you so much! @thushv89 it works! May I ask a follow-up question: is the reason for using tf.reduce_max that "add_input" is a 2D array, so I can't simply use min(), which applies to 1D input? In other words, whenever we want the min/max of an array with more than one dimension we should use tf.reduce_min/max, is that right? (By the way, I think there is a typo: the denominator of "normalized" should be tf.reduce_max - tf.reduce_min.)

@Doi_Ann, you can use reduce_min/max on a 1D vector, over multiple axes of an nD tensor, or over a single axis of an nD tensor. It is quite versatile. The reason I used axis=0 is that this is what scikit-learn's MinMaxScaler does. scikit-learn.org/stable/modules/generated/…
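To illustrate that comment, here is a small sketch (not from the original thread) comparing the column-wise (axis=0) reduction with scikit-learn's MinMaxScaler on a made-up 2D array:

import numpy as np
import tensorflow as tf
from sklearn.preprocessing import MinMaxScaler

x = np.array([[1., 10.],
              [4.,  2.],
              [7.,  6.]], dtype=np.float32)
xt = tf.constant(x)

# column-wise (axis=0) min/max, which is what MinMaxScaler uses
col_min = tf.reduce_min(xt, axis=0, keepdims=True)   # [[1., 2.]]
col_max = tf.reduce_max(xt, axis=0, keepdims=True)   # [[7., 10.]]
tf_scaled = (xt - col_min) / (col_max - col_min)

sk_scaled = MinMaxScaler().fit_transform(x)
print(np.allclose(tf_scaled.numpy(), sk_scaled))     # True

# reduce_min/max also work on 1D tensors or across all axes at once
print(float(tf.reduce_min(xt)))                      # 1.0 (global minimum)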