Tensorflow 2.1.0 - An op outside of the function building code is being passed a "Graph" tensor
【Posted】: 2020-06-18 10:14:07
【Problem description】: I am trying to implement a recent paper. Part of the implementation involves moving from tf 1.14 to tf 2.1.0. The code works with tf 1.14 but no longer works with 2.1.0.
Note: if I disable Eager Execution with tf.compat.v1.disable_eager_execution(), the code works as expected.
Is that really the solution, though? I have built plenty of models in TF 2.x before and have never had to disable eager execution to get normal functionality.
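For reference, the workaround looks roughly like the sketch below (a minimal sketch only; it assumes the call happens at the very top of the script, before any ops or layers are created):

# Workaround sketch (not the fix I'm after): run everything in graph mode.
import tensorflow as tf

tf.compat.v1.disable_eager_execution()   # must run before any ops/layers are built

# ... then build the model exactly as in the snippet below, and
# model.predict(x=dummy_input) no longer raises the _SymbolicException.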
I have distilled the problem down to a very short gist that shows what is happening.
Link and code first, then the detailed error message.
Link to Gist -- https://gist.github.com/darien-schettler/fd5b25626e9eb5b1330cce670bf9cc17
Code
# version 2.1.0
import tensorflow as tf
# version 1.18.1
import numpy as np

# ######## DEFINE CUSTOM FUNCTION FOR TF LAMBDA LAYER ######## #
def resize_like(input_tensor, ref_tensor):
    """ Resize an image tensor to the same size/shape as a reference image tensor

    Args:
        input_tensor : (image tensor) Input image tensor that will be resized
        ref_tensor   : (image tensor) Reference image tensor that we want to resize the input tensor to.

    Returns:
        reshaped tensor
    """
    reshaped_tensor = tf.image.resize(images=input_tensor,
                                      size=tf.shape(ref_tensor)[1:3],
                                      method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
                                      preserve_aspect_ratio=False,
                                      antialias=False,
                                      name=None)
    return reshaped_tensor
# ############################################################# #

# ############ DEFINE MODEL USING TF.KERAS FN API ############ #

# INPUTS
model_input_1 = tf.keras.layers.Input(shape=(160,160,3))
model_input_2 = tf.keras.layers.Input(shape=(160,160,3))

# OUTPUTS
model_output_1 = tf.keras.layers.Conv2D(filters=64,
                                        kernel_size=(1, 1),
                                        use_bias=False,
                                        kernel_initializer='he_normal',
                                        name='conv_name_base')(model_input_1)
model_output_2 = tf.keras.layers.Lambda(function=resize_like,
                                        arguments={'ref_tensor': model_output_1})(model_input_2)

# MODEL
model = tf.keras.models.Model(inputs=[model_input_1, model_input_2],
                              outputs=model_output_2,
                              name="test_model")
# ############################################################# #

# ######### TRY TO UTILIZE PREDICT WITH DUMMY INPUT ########## #
dummy_input = [np.ones((1,160,160,3)), np.zeros((1,160,160,3))]

model.predict(x=dummy_input)  # >>>> ERROR OCCURS HERE <<<<
# ############################################################# #
Full error
>>> model.predict(x=dummy_input) # >>>>ERROR OCCURS HERE<<<<
Traceback (most recent call last):
File "/Users/<username>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 61, in quick_execute
num_outputs)
TypeError: An op outside of the function building code is being passed
a "Graph" tensor. It is possible to have Graph tensors
leak out of the function building context by including a
tf.init_scope in your function building code.
For example, the following function will fail:
  @tf.function
  def has_init_scope():
    my_constant = tf.constant(1.)
    with tf.init_scope():
      added = my_constant * 2
The graph tensor has name: conv_name_base_1/Identity:0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training.py", line 1013, in predict
use_multiprocessing=use_multiprocessing)
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 498, in predict
workers=workers, use_multiprocessing=use_multiprocessing, **kwargs)
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 475, in _model_iteration
total_epochs=1)
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2.py", line 128, in run_one_epoch
batch_outs = execution_function(iterator)
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/keras/engine/training_v2_utils.py", line 98, in execution_function
distributed_function(input_fn))
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 568, in __call__
result = self._call(*args, **kwds)
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/eager/def_function.py", line 638, in _call
return self._concrete_stateful_fn._filtered_call(canon_args, canon_kwds) # pylint: disable=protected-access
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1611, in _filtered_call
self.captured_inputs)
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 1692, in _call_flat
ctx, args, cancellation_manager=cancellation_manager))
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/eager/function.py", line 545, in call
ctx=ctx)
File "/Users/<user-name>/.virtualenvs/<venv-name>/lib/python3.7/site-packages/tensorflow_core/python/eager/execute.py", line 75, in quick_execute
"tensors, but found ".format(keras_symbolic_tensors))
tensorflow.python.eager.core._SymbolicException: Inputs to eager execution function cannot be Keras symbolic tensors, but found [<tf.Tensor 'conv_name_base_1/Identity:0' shape=(None, 160, 160, 64) dtype=float32>]
One potential solution I thought of is to replace the Lambda layer with a custom layer... that also seems to fix the problem. I am not sure what the best practice around this is, though. Code below.
# version 2.1.0
import tensorflow as tf
# version 1.18.1
import numpy as np

# ######## DEFINE CUSTOM LAYER DIRECTLY BY SUBCLASSING ######## #
class ResizeLike(tf.keras.layers.Layer):
    """ tf.keras layer to resize a tensor to the reference tensor shape.

    Attributes:
        keras.layers.Layer: Base layer class.
            This is the class from which all layers inherit.
            - A layer is a class implementing common neural networks
              operations, such as convolution, batch norm, etc.
            - These operations require managing weights,
              losses, updates, and inter-layer connectivity.
    """

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    def call(self, inputs, **kwargs):
        """TODO: docstring

        Args:
            inputs (TODO): TODO
            **kwargs: TODO

        Returns:
            TODO
        """
        input_tensor, ref_tensor = inputs
        return self.resize_like(input_tensor, ref_tensor)

    def resize_like(self, input_tensor, ref_tensor):
        """ Resize an image tensor to the same size/shape as a reference image tensor

        Args:
            input_tensor: (image tensor) Input image tensor that will be resized
            ref_tensor  : (image tensor) Reference image tensor that we want to resize the input tensor to.

        Returns:
            reshaped tensor
        """
        reshaped_tensor = tf.image.resize(images=input_tensor,
                                          size=tf.shape(ref_tensor)[1:3],
                                          method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
                                          preserve_aspect_ratio=False,
                                          antialias=False)
        return reshaped_tensor
# ############################################################# #

# ############ DEFINE MODEL USING TF.KERAS FN API ############ #

# INPUTS
model_input_1 = tf.keras.layers.Input(shape=(160,160,3))
model_input_2 = tf.keras.layers.Input(shape=(160,160,3))

# OUTPUTS
model_output_1 = tf.keras.layers.Conv2D(filters=64,
                                        kernel_size=(1, 1),
                                        use_bias=False,
                                        kernel_initializer='he_normal',
                                        name='conv_name_base')(model_input_1)
model_output_2 = ResizeLike(name="resize_layer")([model_input_2, model_output_1])

# MODEL
model = tf.keras.models.Model(inputs=[model_input_1, model_input_2],
                              outputs=model_output_2,
                              name="test_model")
# ############################################################# #

# ######### TRY TO UTILIZE PREDICT WITH DUMMY INPUT ########## #
dummy_input = [np.ones((1,160,160,3)), np.zeros((1,160,160,3))]

model.predict(x=dummy_input)  # >>>> ERROR OCCURS HERE <<<<
# ############################################################# #
Thoughts??
Thanks in advance!!
Let me know if you would like me to provide anything else.
【Answer 1】: You can try the following steps:
Change resize_like to the following:
def resize_like(inputs):
    input_tensor, ref_tensor = inputs
    reshaped_tensor = tf.image.resize(images=input_tensor,
                                      size=tf.shape(ref_tensor)[1:3],
                                      method=tf.image.ResizeMethod.NEAREST_NEIGHBOR,
                                      preserve_aspect_ratio=False,
                                      antialias=False,
                                      name=None)
    return reshaped_tensor
Then, in the Lambda layer:
model_output_2 = tf.keras.layers.Lambda(function=resize_like)([model_input_2, model_output_1])
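For completeness, here is a sketch of how the question's model looks with both changes applied. It is assembled from the code in the question (same shapes and dummy inputs), so treat it as an illustrative sketch rather than a tested snippet:

# Sketch: the original model rewired so that ref_tensor is passed as a layer
# input instead of through `arguments` (which captured a Keras symbolic tensor).
model_input_1 = tf.keras.layers.Input(shape=(160, 160, 3))
model_input_2 = tf.keras.layers.Input(shape=(160, 160, 3))

model_output_1 = tf.keras.layers.Conv2D(filters=64,
                                        kernel_size=(1, 1),
                                        use_bias=False,
                                        kernel_initializer='he_normal',
                                        name='conv_name_base')(model_input_1)

# Both tensors go in as a single list; resize_like unpacks them internally.
model_output_2 = tf.keras.layers.Lambda(function=resize_like)([model_input_2, model_output_1])

model = tf.keras.models.Model(inputs=[model_input_1, model_input_2],
                              outputs=model_output_2,
                              name="test_model")

dummy_input = [np.ones((1, 160, 160, 3)), np.zeros((1, 160, 160, 3))]
preds = model.predict(x=dummy_input)  # model_input_2 resized to model_output_1's
                                      # spatial size, i.e. preds.shape == (1, 160, 160, 3)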
【Comments】:
Awesome! Thank you! Do you think it is better practice to stick with the Lambda layer, or to subclass and write a custom layer class as shown at the bottom of the original post?
Lambda layers are great for quick prototyping, but I prefer writing a custom layer for anything that involves more than one operation. Stick with the custom layer when building the final model; the Lambda will only cause problems later.
Could you explain why the first line of the function solves the problem?
TensorFlow treats the first argument of a layer call as its input. The type of that input largely does not matter (list, tuple, numpy array, or tensor). In this case we pass both tensors as a list and unpack them inside the function.
@DarienSchettler could I ask you to show the code where you implemented the custom layer? I have a problem similar to yours and I do not know how to solve it with my custom layer.