TensorFlow error in Colab - ValueError: Shapes (None, 1) and (None, 10) are incompatible

Posted: 2020-10-08 11:08:02

Question:

I'm trying to run a small piece of code for a neural network that does character recognition on the MNIST dataset. When it reaches the fit line, I get ValueError: Shapes (None, 1) and (None, 10) are incompatible.

import numpy as np

# Set up TensorFlow 2.x
try:
  # The %tensorflow_version magic only exists in Colab
  %tensorflow_version 2.x

except Exception:
  pass

import tensorflow as tf

tf.__version__

mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

print(x_train.shape)
print(x_test.shape)
print(y_train.shape)
print(y_test.shape)
print(np.unique(y_train))
print(np.unique(y_test))

import matplotlib.pyplot as plt
plt.imshow(x_train[0], cmap='Greys');

y_train[0]

x_train, x_test = x_train / 255.0, x_test / 255.0
x_train.shape

model = tf.keras.Sequential([
                           tf.keras.layers.Flatten(input_shape=(28, 28)),
                           tf.keras.layers.Dense(units=512, activation='relu'),
                           tf.keras.layers.Dense(units=10, activation='softmax')
])
model.summary()
model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
h = model.fit(x_train, y_train, epochs=10, batch_size=256)

The error appears on the last line, as if x_train and y_train had different sizes. But x_train is 60000x28x28 and y_train is 60000x1.


Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 512)               401920    
_________________________________________________________________
dense_1 (Dense)              (None, 10)                5130      
=================================================================
Total params: 407,050
Trainable params: 407,050
Non-trainable params: 0
_________________________________________________________________
Epoch 1/10
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-10-50705bca2031> in <module>()
      6 model.summary()
      7 model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
----> 8 h = model.fit(x_train, y_train, epochs=10, batch_size=256)

10 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
    966           except Exception as e:  # pylint:disable=broad-except
    967             if hasattr(e, "ag_error_metadata"):
--> 968               raise e.ag_error_metadata.to_exception(e)
    969             else:
    970               raise

ValueError: in user code:

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:571 train_function  *
        outputs = self.distribute_strategy.run(
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:951 run  **
        return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
        return self._call_for_each_replica(fn, args, kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
        return fn(*args, **kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:533 train_step  **
        y, y_pred, sample_weight, regularization_losses=self.losses)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/compile_utils.py:205 __call__
        loss_value = loss_obj(y_t, y_p, sample_weight=sw)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:143 __call__
        losses = self.call(y_true, y_pred)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:246 call
        return self.fn(y_true, y_pred, **self._fn_kwargs)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/losses.py:1527 categorical_crossentropy
        return K.categorical_crossentropy(y_true, y_pred, from_logits=from_logits)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/backend.py:4561 categorical_crossentropy
        target.shape.assert_is_compatible_with(output.shape)
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_shape.py:1117 assert_is_compatible_with
        raise ValueError("Shapes %s and %s are incompatible" % (self, other))

    ValueError: Shapes (None, 1) and (None, 10) are incompatible

Comments:

Please ask a specific question. You have clearly put in effort, but it isn't clear what you are asking (beyond showing the error message).

Answer 1:

You need to one-hot encode the y_train vector before passing it to the fit method. You can do that with the following code:

from keras.utils import to_categorical

# make the model and load the training dataset.

y_train = to_categorical(y_train)

# call the fit method.
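
For completeness, a minimal sketch of how the pieces fit together, assuming TensorFlow 2 and the model and data already defined in the question (tf.keras.utils.to_categorical is the bundled equivalent of the keras.utils import above):

import tensorflow as tf

# One-hot encode the integer labels (0-9) so they match the 10-unit softmax output.
y_train_oh = tf.keras.utils.to_categorical(y_train, num_classes=10)  # shape (60000, 10)
y_test_oh = tf.keras.utils.to_categorical(y_test, num_classes=10)    # shape (10000, 10)

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])
h = model.fit(x_train, y_train_oh, epochs=10, batch_size=256)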

Comments:

Thanks, I tried loss='sparse_categorical_crossentropy' and it worked. Now I'll try your solution to see the difference!

Answer 2:

The problem is here:

model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy'])

The loss categorical_crossentropy expects one-hot encoded class vectors, as described here. However, your labels are not one-hot encoded. In that case, the simplest fix is to use loss='sparse_categorical_crossentropy', since your labels are sparse integer class indices.
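
For reference, a minimal sketch of that one-line change, reusing the model and data from the question; the integer labels (shape (60000,)) are passed to fit unchanged:

model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
h = model.fit(x_train, y_train, epochs=10, batch_size=256)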

Comments:

Thanks, I tried it and it works great. I forgot the labels were sparse (integer-encoded). I should try one-hot encoding to see how it works.

Answer 3:

I haven't tried your code, but these errors usually come from an incorrect output setup. I mean, your last layer doesn't match your outputs, or something like that. I had the same problem, and this is how I solved it:

number_of_outputs = 10
# 10 is an example, you need to know how many outputs you have in your dataset
model.add(Dense(number_of_outputs, activation='softmax'))

An example:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential()
model.add(Dense(16, input_shape=(X.shape[1],), activation='relu'))
model.add(Dense(10, activation='softmax'))
# where 10 is my number of outputs in my dataset
model.summary()
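
If you are not sure how many output units the last layer needs, one option (an assumption on my part, not from the original answer) is to derive it from the labels themselves:

import numpy as np

# For MNIST this evaluates to 10 (digits 0-9).
number_of_outputs = len(np.unique(y_train))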

Hope this solves your problem.

Comments:
