Training a custom many-to-one RNN in TensorFlow 2

Posted: 2021-03-31 18:23:36

Problem description:

I am implementing a custom RNN with TensorFlow 2. I wrote a model that takes an arbitrary number of time steps, collects the last hidden layer's output across all of them, and applies a few Dense layers to it.

My dataset consists of training examples of shape [28207, 8, 2] (28207 training examples, 8 time steps, 2 features), and the labels form a matrix of shape [28207, 2] (28207 training examples, 2 features). When training the model, however, I get the following error:

Data cardinality is ambiguous:
x sizes: (then a lot of 8's)
y sizes: (then a lot of 2's)

I tried expanding the label set's dimensions to [28207, 1, 2], with no success, and Google was not much help either.

Is this kind of many-to-one implementation even possible in tf2?

I am using Anaconda with Python 3.6.12 on Windows 10, with TensorFlow 2.4.0. The cell, the model, and the training code look like this:

import datetime
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import RNN

# customL2Regularizer, OutputLayer and bivariate_loss_function are custom
# objects defined elsewhere in my code (not shown here).

class RNNCell(keras.layers.Layer):
    def __init__(self, units, **kwargs):
        self.units = units
        self.state_size = units
        super(RNNCell, self).__init__(**kwargs)

    def build(self, input_shape):
        # The cell receives a pair (x, h_below), so input_shape is a pair of
        # shapes; input_shape[0][-1] is the feature dimension of x.
        # i computation
        self.Wxi = self.add_weight(name='Wxi', shape=(input_shape[0][-1], self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.Whi = self.add_weight(name='Whi', shape=(self.units, self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.Wci = self.add_weight(name='Wci', shape=(self.units, self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.bi = self.add_weight(name='bi', shape=(self.units,), initializer="zeros", regularizer=customL2Regularizer)

        # f computation
        self.Wxf = self.add_weight(name='Wxf', shape=(input_shape[0][-1], self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.Whf = self.add_weight(name='Whf', shape=(self.units, self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.Wcf = self.add_weight(name='Wcf', shape=(self.units, self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.bf = self.add_weight(name='bf', shape=(self.units,), initializer="zeros", regularizer=customL2Regularizer)

        # c computation
        self.Wxc = self.add_weight(name='Wxc', shape=(input_shape[0][-1], self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.Whc = self.add_weight(name='Whc', shape=(self.units, self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.bc = self.add_weight(name='bc', shape=(self.units,), initializer="zeros", regularizer=customL2Regularizer)

        # o computation
        self.Wxo = self.add_weight(name='Wxo', shape=(input_shape[0][-1], self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.Who = self.add_weight(name='Who', shape=(self.units, self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.Wco = self.add_weight(name='Wco', shape=(self.units, self.units), initializer="random_normal", regularizer=customL2Regularizer)
        self.bo = self.add_weight(name='bo', shape=(self.units,), initializer="zeros", regularizer=customL2Regularizer)

    def call(self, inputs, states):
        # It expects two inputs: the X and the previous layer's h;
        # states[0] carries the cell state c from the previous time step.
        i = tf.math.sigmoid(K.dot(inputs[0], self.Wxi) + K.dot(inputs[1], self.Whi) + K.dot(states[0], self.Wci) + self.bi)
        f = tf.math.sigmoid(K.dot(inputs[0], self.Wxf) + K.dot(inputs[1], self.Whf) + K.dot(states[0], self.Wcf) + self.bf)
        c = f * states[0] + i * tf.math.tanh(K.dot(inputs[0], self.Wxc) + K.dot(inputs[1], self.Whc) + self.bc)
        o = tf.math.sigmoid(K.dot(inputs[0], self.Wxo) + K.dot(inputs[1], self.Who) + K.dot(c, self.Wco) + self.bo)
        # h_t = o * tanh(c); the new cell state c is passed to the next step
        return o * tf.tanh(c), c
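
Written out (as the call method above computes it, with x_t the external input, h'_t the hidden output of the layer below passed in as inputs[1], and c_{t-1} the carried cell state), the cell implements peephole-style LSTM gates:

i_t = \sigma(x_t W_{xi} + h'_t W_{hi} + c_{t-1} W_{ci} + b_i)
f_t = \sigma(x_t W_{xf} + h'_t W_{hf} + c_{t-1} W_{cf} + b_f)
c_t = f_t \odot c_{t-1} + i_t \odot \tanh(x_t W_{xc} + h'_t W_{hc} + b_c)
o_t = \sigma(x_t W_{xo} + h'_t W_{ho} + c_t W_{co} + b_o)
h_t = o_t \odot \tanh(c_t)

Note that only c is carried across time steps; the cell has no recurrent h_{t-1} term of its own.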

The network:

time_steps = 8  # matches the [28207, 8, 2] training data
rnn_hidden_units = 128
rnn_hidden_layers = 2
lstm_outputs = []

# Inputs: [None, time_steps, 2]
inputs = keras.Input(shape=(time_steps, 2), name='inputs')

# First hidden layer's "previous h": zeros of shape [None, time_steps, rnn_hidden_units]
zeros_placeholder = tf.fill(tf.stack([tf.shape(inputs)[0], time_steps, rnn_hidden_units]), 0.0, name='zeros_placeholder')

# First hidden layer: inputs, zeros_placeholder => [None, time_steps, rnn_hidden_units]
last_hidden_output = RNN(RNNCell(rnn_hidden_units), return_sequences=True, name='first_rnn_layer')((inputs, zeros_placeholder))

# Append the last time step's output to a list
lstm_outputs.append(last_hidden_output[:, -1, :])

# The rest of the hidden layers
for l in range(rnn_hidden_layers - 1):
    last_hidden_output = RNN(RNNCell(rnn_hidden_units), return_sequences=True, name='rnn_layer_{}'.format(l + 2))((inputs, last_hidden_output))
    lstm_outputs.append(last_hidden_output[:, -1, :])

# Compute p_t+1 (assuming Y is the sigmoid function); note that tf.stack
# defaults to axis=0, so the per-layer outputs are stacked along a new
# leading dimension
p = tf.sigmoid(OutputLayer(rnn_hidden_units)(tf.stack(lstm_outputs)))

# Compute (mu, sigma, rho): [None, 5]
output = OutputLayer(5, include_bias=False)(p)

# Define the model
model = keras.models.Model(inputs=inputs, outputs=output)
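
A quick way to sanity-check the wiring before training is to push a dummy batch through the model (a sketch; the batch size of 4 is arbitrary). One thing worth verifying: tf.stack above stacks the per-layer outputs on axis 0, so the final output may come out as (2, 4, 5) rather than the (None, 5) the comments suggest, depending on how OutputLayer handles the extra dimension:

dummy = tf.random.normal((4, time_steps, 2))
model.summary()            # per-layer output shapes
print(model(dummy).shape)  # compare against the [None, 5] the comments expect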

The code that fails:

# Note: 'val_loss' is not a valid compile-time metric; validation loss is
# reported automatically when validation data is passed to fit()
model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.95), loss=bivariate_loss_function, metrics=['val_loss'])

# Define the Keras TensorBoard callback.
logdir = "./logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)

# Train the model.
model.fit(training_examples,
          training_labels,
          batch_size=64,
          epochs=5,
          callbacks=[tensorboard_callback])

Comments:

Why don't you post your error log?
@DachuanZhao Because the question body is too long.
@YamilEssus You need to post the lines you think are relevant.

Answer 1:

It turned out to be a problem with the input: it was a Python list instead of a numpy array.
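
For anyone hitting the same message: tf.keras interprets a Python list passed to fit() as one array per model input rather than one per sample, so a list of 28207 arrays of shape (8, 2) is reported as 28207 x-cardinalities of 8 (the "lot of 8's" above). A minimal sketch of the fix, assuming the data starts out as lists of per-example arrays:

import numpy as np

# Lists of 28207 arrays of shape (8, 2) and (2,) -> single contiguous arrays
training_examples = np.stack(training_examples)  # shape (28207, 8, 2)
training_labels = np.stack(training_labels)      # shape (28207, 2)

model.fit(training_examples, training_labels, batch_size=64, epochs=5)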

