ValueError: Input 0 of layer lstm_21 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 546)


Posted: 2021-08-25 08:32:15

Here is the code I wrote:

from keras.callbacks import History
from keras.models import Sequential
from keras.layers import LSTM, Dense, Dropout

history = History()
# Create model - 3 layers. First layer 128 LSTM units, second layer 32 dense neurons,
# and the output layer has as many neurons as there are intents, with softmax.
model = Sequential()
model.add(LSTM(128, input_shape=(len(train_x[0]),), return_sequences=False, activation="tanh"))
model.add(Dropout(0.2))
model.add(Dense(32, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(len(train_y[0]), activation="softmax"))

The error is raised on this line:

model.add(LSTM(128, input_shape=(len(train_x[0]),), return_sequences=False, activation="tanh"))

My train_x shape is (398, 546) and my train_y shape is (398, 87). Does anyone have an idea what I'm doing wrong? Thanks! The full traceback is:

ValueError                                Traceback (most recent call last)
<ipython-input-82-1d1d56cc5875> in <module>
      4 # equal to number of intents to predict output intent with softmax
      5 model = Sequential()
----> 6 model.add(LSTM(128, input_shape=(len(train_x[0]),), return_sequences=False, activation="tanh"))
      7 model.add(Dropout(0.2))
      8 model.add(Dense(32, activation = "relu"))

~\Anaconda3\lib\site-packages\tensorflow\python\training\tracking\base.py in _method_wrapper(self, *args, **kwargs)
    515     self._self_setattr_tracking = False  # pylint: disable=protected-access
    516     try:
--> 517       result = method(self, *args, **kwargs)
    518     finally:
    519       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\sequential.py in add(self, layer)
    206           # and create the node connecting the current layer
    207           # to the input layer we just created.
--> 208           layer(x)
    209           set_inputs = True
    210 

~\Anaconda3\lib\site-packages\tensorflow\python\keras\layers\recurrent.py in __call__(self, inputs, initial_state, constants, **kwargs)
    658 
    659     if initial_state is None and constants is None:
--> 660       return super(RNN, self).__call__(inputs, **kwargs)
    661 
    662     # If any of `initial_state` or `constants` are specified and are Keras

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in __call__(self, *args, **kwargs)
    950     if _in_functional_construction_mode(self, inputs, args, kwargs, input_list):
    951       return self._functional_construction_call(inputs, args, kwargs,
--> 952                                                 input_list)
    953 
    954     # Maintains info about the `Layer.call` stack.

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in _functional_construction_call(self, inputs, args, kwargs, input_list)
   1089         # Check input assumptions set after layer building, e.g. input shape.
   1090         outputs = self._keras_tensor_symbolic_call(
-> 1091             inputs, input_masks, args, kwargs)
   1092 
   1093         if outputs is None:

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in _keras_tensor_symbolic_call(self, inputs, input_masks, args, kwargs)
    820       return nest.map_structure(keras_tensor.KerasTensor, output_signature)
    821     else:
--> 822       return self._infer_output_signature(inputs, args, kwargs, input_masks)
    823 
    824   def _infer_output_signature(self, inputs, args, kwargs, input_masks):

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in _infer_output_signature(self, inputs, args, kwargs, input_masks)
    860           # overridden).
    861           # TODO(kaftan): do we maybe_build here, or have we already done it?
--> 862           self._maybe_build(inputs)
    863           outputs = call_fn(inputs, *args, **kwargs)
    864 

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\base_layer.py in _maybe_build(self, inputs)
   2683     if not self.built:
   2684       input_spec.assert_input_compatibility(
-> 2685           self.input_spec, inputs, self.name)
   2686       input_list = nest.flatten(inputs)
   2687       if input_list and self._dtype_policy.compute_dtype is None:

~\Anaconda3\lib\site-packages\tensorflow\python\keras\engine\input_spec.py in assert_input_compatibility(input_spec, inputs, layer_name)
    221                          'expected ndim=' + str(spec.ndim) + ', found ndim=' +
    222                          str(ndim) + '. Full shape received: ' +
--> 223                          str(tuple(shape)))
    224     if spec.max_ndim is not None:
    225       ndim = x.shape.rank
ValueError: Input 0 of layer lstm_21 is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 546)

Comments:

You shouldn't pass len(train_x[0]); change it to train_x[0].shape.

Thanks for your answer, but I get the same error.

Answer 1:

An LSTM expects its input to be 3-D, (batch_size, timesteps, input_dim); the input_shape argument Keras expects is (timesteps, input_dim). In this case you have input_shape=(len(train_x[0]),), which is only one dimension.

Adding an extra dimension of size 1 will solve your problem:

model.add(LSTM(128, input_shape=(1, len(train_x[0])), return_sequences=False, activation="tanh"))
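
Changing input_shape alone only fixes how the layer is built; the arrays passed to model.fit must also be 3-D, otherwise the same ndim error comes back at training time. A minimal sketch, assuming train_x and train_y are the (398, 546) and (398, 87) arrays from the question (the compile/fit settings here are placeholders, not from the original post):

import numpy as np

# train_x is 2-D: (samples, features) = (398, 546).
# Add a timesteps axis of length 1 so the LSTM sees (samples, 1, 546).
train_x = np.array(train_x)
train_y = np.array(train_y)
train_x_3d = train_x.reshape((train_x.shape[0], 1, train_x.shape[1]))

model = Sequential()
model.add(LSTM(128, input_shape=(1, train_x_3d.shape[2]), return_sequences=False, activation="tanh"))
model.add(Dropout(0.2))
model.add(Dense(32, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(train_y.shape[1], activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])

# Fit on the reshaped array; epoch and batch size are illustrative values.
model.fit(train_x_3d, train_y, epochs=200, batch_size=8, verbose=1)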

Comments:

Thanks for your answer, but I get the same error.

Can you share a sample of your dataset?

My dataset is like the one in this post: data-flair.training/blogs/python-chatbot-project. I followed the steps from that article, but it doesn't work when I add an LSTM layer, even after changing the input shape.
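
A note beyond what the thread spells out: with a single timestep the LSTM has no sequence to step through, so an alternative way to add the third dimension is to treat each of the 546 bag-of-words features as one timestep with a single value. A sketch under that assumption (not from the original answer):

import numpy as np

# Alternative reshape: (398, 546) -> (398, 546, 1),
# i.e. 546 timesteps with 1 feature each.
train_x_seq = np.array(train_x).reshape((-1, 546, 1))

model = Sequential()
model.add(LSTM(128, input_shape=(546, 1), return_sequences=False, activation="tanh"))
model.add(Dropout(0.2))
model.add(Dense(32, activation="relu"))
model.add(Dropout(0.5))
model.add(Dense(87, activation="softmax"))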
