ValueError: Shapes must be equal rank, but are 1 and 0 From merging shape 1 with other shapes. for 'loss/AddN'

Posted: 2020-12-04 10:46:42

I am trying to build a variational autoencoder with TensorFlow. I followed all the steps from the Keras guide (https://keras.io/guides/making_new_layers_and_models_via_subclassing/), although I made a few small changes.

annealing_weight = tf.keras.backend.variable(0.01)

test = VariationalAutoEncoder(annealing_weight,
                              [8, 8, 128],
                              input_shape=(None, 256, 256, 1))
test.compile('adam', loss=None)
test.summary()
test.train_on_batch(np.random.randn(32, 256, 256, 1),None)

I can compile the network and print the summary, and everything looks fine. However, when I try training on a single batch to check that the network works, I get the error message below. The problem seems to come from the loss function.

I hope someone can help me. Thanks!

WARNING:tensorflow:AutoGraph could not transform <bound method ConvolutionalBlock.call of <__main__.ConvolutionalBlock object at 0x000000001D5DC408>> and will run it as-is.
Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output.
Cause: Unable to locate the source code of <bound method ConvolutionalBlock.call of <__main__.ConvolutionalBlock object at 0x000000001D5DC408>>. Note that functions defined in certain environments, like the interactive Python shell do not expose their source code. If that is the case, you should to define them in a .py source file. If you are certain the code is graph-compatible, wrap the call using @tf.autograph.do_not_convert. Original error: could not get source code
WARNING:tensorflow:Output output_1 missing from loss dictionary. We assume this was done on purpose. The fit and evaluate APIs will not be expecting any data to be passed to output_1.
Traceback (most recent call last):
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1619, in _create_c_op
    c_op = c_api.TF_FinishOperation(op_desc)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Shapes must be equal rank, but are 1 and 0
    From merging shape 1 with other shapes. for 'loss_1/AddN' (op: 'AddN') with input shapes: [?], [?], [], [], [], [], [], [], [], [], [], [], [], [].
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "<input>", line 10, in <module>
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 1078, in train_on_batch
    standalone=True)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\keras\engine\training_v2_utils.py", line 416, in train_on_batch
    extract_tensors_from_dataset=True)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2360, in _standardize_user_data
    self._compile_from_inputs(all_inputs, y_input, x, y)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 2618, in _compile_from_inputs
    experimental_run_tf_function=self._experimental_run_tf_function)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\training\tracking\base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 446, in compile
    self._compile_weights_loss_and_weighted_metrics()
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\training\tracking\base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 1592, in _compile_weights_loss_and_weighted_metrics
    self.total_loss = self._prepare_total_loss(masks)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\keras\engine\training.py", line 1701, in _prepare_total_loss
    math_ops.add_n(custom_losses))
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\util\dispatch.py", line 180, in wrapper
    return target(*args, **kwargs)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\ops\math_ops.py", line 3053, in add_n
    return gen_math_ops.add_n(inputs, name=name)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\ops\gen_math_ops.py", line 420, in add_n
    "AddN", inputs=inputs, name=name)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\framework\op_def_library.py", line 742, in _apply_op_helper
    attrs=attr_protos, op_def=op_def)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\framework\func_graph.py", line 595, in _create_op_internal
    compute_device)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\framework\ops.py", line 3322, in _create_op_internal
    op_def=op_def)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1786, in __init__
    control_input_ops)
  File "C:\Users\user\.conda\envs\Thesis\lib\site-packages\tensorflow_core\python\framework\ops.py", line 1622, in _create_c_op
    raise ValueError(str(e))
ValueError: Shapes must be equal rank, but are 1 and 0
    From merging shape 1 with other shapes. for 'loss_1/AddN' (op: 'AddN') with input shapes: [?], [?], [], [], [], [], [], [], [], [], [], [], [], [].

The code is shown below.

import numpy as np
import tensorflow as tf

from tensorflow.keras import layers as tfl

class ConvolutionalBlock(tfl.Layer):

    def __init__(self, filters, name, deconv=False, **kwargs):

        self.conv_layer = tfl.Conv2DTranspose if deconv else tfl.Conv2D
        self.conv_layer = self.conv_layer(filters,
                                          kernel_size=3,
                                          padding='same',
                                          kernel_initializer='he_normal',
                                          kernel_regularizer=tf.keras.regularizers.l2(0.0001),
                                          strides=2,
                                          name='conv')

        self.batch_norm = tfl.BatchNormalization(name='de_ennorm')
        self.relu = tfl.ReLU(name='en_relu')# + str(index))

        super(ConvolutionalBlock, self).__init__(name=name, **kwargs)

    def call(self, inputs, **kwargs):
        outputs = self.conv_layer(inputs)
        outputs = self.batch_norm(outputs)
        outputs = self.relu(outputs)
        return outputs

class Sampling(tfl.Layer):

    def __init__(self, **kwargs):
        super(Sampling, self).__init__(name='reparameterization_trick', **kwargs)

    def call(self, inputs, training=None, mask=None, **kwargs):
        x_mean, x_variance = inputs

        return x_mean + tf.keras.backend.exp(0.5 * x_variance) * \
                   tf.keras.backend.random_normal(shape=(32, 128), mean=0., stddev=1.0)


class Encoder(tfl.Layer):

    def __init__(self, **kwargs):
        super(Encoder, self).__init__(name='Encoder', **kwargs)

        self.convs = [
            ConvolutionalBlock(8, 'conv1'),
            ConvolutionalBlock(16, 'conv2'),
            ConvolutionalBlock(32, 'conv3'),
            ConvolutionalBlock(64, 'conv4'),
            ConvolutionalBlock(128, 'conv5')
        ]

        self.features = tfl.GlobalAveragePooling2D(name='globaverpool')
        self.denserepresentation = tfl.Dense(128, activation='relu', name='Dense1')

        self.x_mean = tfl.Dense(128, name='meanvector')
        self.x_variance = tfl.Dense(128, name='variancevector')

        self.sampling = Sampling()


    def call(self, inputs, training=None, mask=None, **kwargs):
        outputs = inputs
        print(outputs)

        for layer in self.convs:
            outputs = layer(outputs)
            print(outputs)

        outputs = self.features(outputs)
        print(outputs)
        dense_output = self.denserepresentation(outputs)
        print(dense_output)
        x_mean = self.x_mean(dense_output)
        x_variance = self.x_variance(dense_output)
        output = self.sampling((x_mean,x_variance))

        return output, x_mean, x_variance


class Decoder(tfl.Layer):

    def __init__(self,
                 dense_reshape,
                 **kwargs):

        super(Decoder, self).__init__(name='Decoder', **kwargs)

        self.denserepresentation = tfl.Dense(np.prod(dense_reshape),
                                             activation='relu',
                                             kernel_regularizer=tf.keras.regularizers.l2(0.0001),
                                             name='dense2')
        self.reshaped = tfl.Reshape(dense_reshape,
                                    name='reshape')

        self.deconvs=[
            ConvolutionalBlock(128, 'conv1', deconv=True),
            ConvolutionalBlock(64, 'conv2', deconv=True),
            ConvolutionalBlock(32, 'conv3', deconv=True),
            ConvolutionalBlock(16, 'conv4', deconv=True),
            ConvolutionalBlock(8, 'conv5', deconv=True)
        ]

        self.output_layer = tfl.Conv2D(filters=1,
                                       kernel_size=3,
                                       activation='sigmoid', # check this
                                       padding='same',
                                       name='decodedconv',
                                       kernel_regularizer=tf.keras.regularizers.l2(0.0001),
                                       kernel_initializer='he_normal')

    def call(self, inputs, training=None, mask=None):
        outputs = inputs
        outputs = self.denserepresentation(outputs)
        outputs = self.reshaped(outputs)

        for layer in self.deconvs:
            outputs = layer(outputs)

        outputs = self.output_layer(outputs)

        return outputs


class VariationalAutoEncoder(tf.keras.Model):

    def __init__(self,
                 annealing_weight,
                 dense_reshape,
                 input_shape,
                 **kwargs):
        super(VariationalAutoEncoder, self).__init__(**kwargs)

        self.annealing_weight = annealing_weight  # for KL-loss

        self.encoder = Encoder()
        self.decoder = Decoder(dense_reshape)

        self.build(input_shape)



    def call(self, inputs, training=None, mask=None):
        dense_output, x_mean, x_variance = self.encoder(inputs)
        output = self.decoder(dense_output)

        kl_loss = - self.annealing_weight * tf.reduce_mean(
            x_variance - tf.keras.backend.square(x_mean)
            - tf.keras.backend.exp(x_variance) + 1,
            axis=-1)
        self.add_loss(lambda: kl_loss)
        return output


Answer 1:

As the error message states, the input tensors passed to the tf.math.add_n function have different ranks. Below I have recreated your error -

Code to reproduce the error -

%tensorflow_version 1.x
import tensorflow as tf

a = tf.constant([[3, 5], [4, 8]])
b = tf.constant([[[1, 6]], [[2, 9]]])
tf.math.add_n([a, b, a])

Output -

TensorFlow 1.x selected.
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1606   try:
-> 1607     c_op = c_api.TF_FinishOperation(op_desc)
   1608   except errors.InvalidArgumentError as e:

InvalidArgumentError: Shapes must be equal rank, but are 3 and 2
    From merging shape 1 with other shapes. for 'AddN' (op: 'AddN') with input shapes: [2,2], [2,1,2], [2,2].

During handling of the above exception, another exception occurred:

ValueError                                Traceback (most recent call last)
9 frames
/tensorflow-1.15.2/python3.6/tensorflow_core/python/framework/ops.py in _create_c_op(graph, node_def, inputs, control_inputs)
   1608   except errors.InvalidArgumentError as e:
   1609     # Convert to ValueError for backwards compatibility.
-> 1610     raise ValueError(str(e))
   1611 
   1612   return c_op

ValueError: Shapes must be equal rank, but are 3 and 2
    From merging shape 1 with other shapes. for 'AddN' (op: 'AddN') with input shapes: [2,2], [2,1,2], [2,2].

Note - the error message is worded differently in TensorFlow 2.x.

To fix this error, pass tensors of the same rank to the tf.math.add_n function.

Fixed code -

%tensorflow_version 1.x
import tensorflow as tf

a = tf.constant([[3, 5], [4, 8]])
b = tf.constant([[1, 6], [2, 9]])
tf.math.add_n([a, b, a])

Output -

<tf.Tensor 'AddN_1:0' shape=(2, 2) dtype=int32>
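
Relating this back to the model in the question: the rank-1 entries (`[?]`) in the `AddN` input shapes come from the custom loss added via `add_loss`, because `tf.reduce_mean(..., axis=-1)` only collapses the last axis and keeps the batch dimension, while the `[]` entries are the scalar (rank 0) L2 kernel-regularizer losses. A minimal sketch of one possible adjustment (my reading, not a confirmed fix from the poster) is to reduce the KL term over all axes so it becomes a scalar before calling `add_loss`, assuming you want the KL divergence averaged over the batch:

    def call(self, inputs, training=None, mask=None):
        dense_output, x_mean, x_variance = self.encoder(inputs)
        output = self.decoder(dense_output)

        # Same KL expression as in the question, but reduced over all axes
        # (latent dimension and batch) so the result is a scalar. A rank-0
        # tensor matches the rank of the L2 regularizer losses that Keras
        # sums together with add_n.
        kl_loss = -self.annealing_weight * tf.reduce_mean(
            x_variance - tf.keras.backend.square(x_mean)
            - tf.keras.backend.exp(x_variance) + 1)

        self.add_loss(kl_loss)  # pass the tensor itself rather than a lambda
        return output

With every custom loss at rank 0, the `add_n` call inside Keras's `_prepare_total_loss` no longer has to merge a rank-1 shape with scalars, which is exactly the merge that fails in the traceback above.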

