How to fix the "OperatorNotAllowedInGraphError" error in TensorFlow 2.0


【Posted】: 2020-01-13 06:40:55 【Question】:

I am learning TensorFlow 2.0 from the official tutorials. I can understand the result of the code below.

def square_if_positive(x):
  return [i ** 2 if i > 0 else i for i in x]
square_if_positive(range(-5, 5))

# result
[-5, -4, -3, -2, -1, 0, 1, 4, 9, 16]

But if I change the input to a tensor and decorate the function with @tf.function, like this

@tf.function
def square_if_positive(x):
  return [i ** 2 if i > 0 else i for i in x]
square_if_positive(tf.range(-5, 5))

I get the following error!

OperatorNotAllowedInGraphError            Traceback (most recent call last)
<ipython-input-39-6c17f29a3443> in <module>
      2 def square_if_positive(x):
      3     return [i**2 if i > 0 else i for i in x]
----> 4 square_if_positive(tf.range(10))
      5 # measure_graph_size(square_if_positive, range(10))

~/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py in __call__(self, *args, **kwds)
    437     # This is the first call of __call__, so we have to initialize.
    438     initializer_map = 
--> 439     self._initialize(args, kwds, add_initializers_to=initializer_map)
    440     if self._created_variables:
    441       try:

~/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
    380     self._concrete_stateful_fn = (
    381         self._stateful_fn._get_concrete_function_internal_garbage_collected(  # pylint: disable=protected-access
--> 382             *args, **kwds))
    383 
    384     def invalid_creator_scope(*unused_args, **unused_kwds):

~/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
   1793     if self.input_signature:
   1794       args, kwargs = None, None
-> 1795     graph_function, _, _ = self._maybe_define_function(args, kwargs)
   1796     return graph_function
   1797 

~/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py in _maybe_define_function(self, args, kwargs)
   2093         graph_function = self._function_cache.primary.get(cache_key, None)
   2094         if graph_function is None:
-> 2095           graph_function = self._create_graph_function(args, kwargs)
   2096           self._function_cache.primary[cache_key] = graph_function
   2097         return graph_function, args, kwargs

~/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
   1984             arg_names=arg_names,
   1985             override_flat_arg_shapes=override_flat_arg_shapes,
-> 1986             capture_by_value=self._capture_by_value),
   1987         self._function_attributes,
   1988         # Tell the ConcreteFunction to clean up its graph once it goes out of

~/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
    851                                           converted_func)
    852 
--> 853       func_outputs = python_func(*func_args, **func_kwargs)
    854 
    855       # invariant: `func_outputs` contains only Tensors, CompositeTensors,

~/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/eager/def_function.py in wrapped_fn(*args, **kwds)
    323         # __wrapped__ allows AutoGraph to swap in a converted function. We give
    324         # the function a weak reference to itself to avoid a reference cycle.
--> 325         return weak_wrapped_fn().__wrapped__(*args, **kwds)
    326     weak_wrapped_fn = weakref.ref(wrapped_fn)
    327 

~/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/framework/func_graph.py in wrapper(*args, **kwargs)
    841           except Exception as e:  # pylint:disable=broad-except
    842             if hasattr(e, "ag_error_metadata"):
--> 843               raise e.ag_error_metadata.to_exception(type(e))
    844             else:
    845               raise

OperatorNotAllowedInGraphError: in converted code:

    <ipython-input-37-6c17f29a3443>:3 square_if_positive  *
        return [i**2 if i > 0 else i for i in x]
    /Users/zhangpan/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:547 __iter__
        self._disallow_iteration()
    /Users/zhangpan/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:540 _disallow_iteration
        self._disallow_when_autograph_enabled("iterating over `tf.Tensor`")
    /Users/zhangpan/tf2_workspace/tf2.0/lib/python3.6/site-packages/tensorflow_core/python/framework/ops.py:518 _disallow_when_autograph_enabled
        " decorating it directly with @tf.function.".format(task))

    OperatorNotAllowedInGraphError: iterating over `tf.Tensor` is not allowed: AutoGraph did not convert this function. Try decorating it directly with @tf.function.

I can't find any explanation for this error. I don't think the real cause is that "iterating over tf.Tensor is not allowed", because I can write the following:

@tf.function
def square_if_positive(x):
    for i in x:
        if i>0:
            tf.print(i**2)
        else:
            tf.print(i)
square_if_positive(tf.range(10))

In the code above I iterate over a tensor in the same way, and it runs without the error.

So my question is: what is the real cause of this error? Any suggestions would help. I have read a lot of material, but I still can't make sense of this error.

【Question comments】:

【Answer 1】:

The root cause is that AutoGraph does not yet support list comprehensions (mainly because it is difficult to determine the dtype of the result in all cases).

As a workaround, you can use tf.map_fn for the comprehension:

return tf.map_fn(lambda i: i ** 2 if i > 0 else i, x)

For more information, see this issue.
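Not part of the original answer, but since the branch here is element-wise, a vectorized alternative that sidesteps Python conditionals on tensors entirely is tf.where. A minimal sketch:

import tensorflow as tf

@tf.function
def square_if_positive(x):
    # Element-wise select: x ** 2 where x > 0, otherwise x unchanged.
    return tf.where(x > 0, x ** 2, x)

print(square_if_positive(tf.range(-5, 5)))
# Expected: [-5 -4 -3 -2 -1  0  1  4  9 16]

This keeps the whole computation vectorized, so nothing needs to be iterated or unrolled at trace time.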

【Comments】:

【Answer 2】:

In case it helps someone:

I had the same problem with this code:

for index, image in enumerate(inputs):
    ... My code ...

The solution was to do it this way:

index = 0
for image in inputs:
    .... My code ...
    index += 1
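For context, here is a minimal self-contained sketch of this pattern inside a @tf.function; the function name, the dummy batch shape, and the tf.print body are made up for illustration:

import tensorflow as tf

@tf.function
def process_batch(inputs):
    index = 0                      # plain Python counter instead of enumerate()
    for image in inputs:           # AutoGraph converts this loop over a tensor
        tf.print("image", index, "sum:", tf.reduce_sum(image))
        index += 1

process_batch(tf.ones([4, 8, 8, 3]))   # e.g. a batch of 4 dummy 8x8 images with 3 channels

If AutoGraph complains about the Python counter in your TF version, initializing it as index = tf.constant(0) keeps the loop variable a tensor from the start.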

【Comments】:

【Answer 3】:

I ran into a similar problem when using tf.range() instead of Python's range() for a list comprehension inside a TensorFlow graph function. I was training a 3D segmentation neural network and had to use range() to make the code work.

Check the pseudocode below:

Y         = # [Batch,Height,Width,Depth,Channels]
y_predict = # [B,H,W,D,C,MC_Runs] ; MC_Runs=Monte Carlo Runs

@tf.function
def train_loss(Y,y_predict):
    # calculate loss and return a scalar value

@tf.function
def train_step():
    loss = [train_loss(Y, y_predict[:, :, :, :, :, id_]) for id_ in range(MC_RUNS)]
    loss = tf.math.reduce_mean(loss)
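To make the point concrete, here is a hedged, runnable version of the same idea with made-up shapes and a dummy loss. The key detail is that range(MC_RUNS) yields Python ints, so the comprehension is unrolled while tracing, whereas tf.range(MC_RUNS) would yield tensors and hit the same OperatorNotAllowedInGraphError:

import tensorflow as tf

MC_RUNS   = 4
Y         = tf.random.uniform([2, 8, 8, 8, 1])            # [Batch, Height, Width, Depth, Channels]
y_predict = tf.random.uniform([2, 8, 8, 8, 1, MC_RUNS])   # [B, H, W, D, C, MC_Runs]

@tf.function
def train_loss(y_true, y_pred):
    # Dummy scalar loss, standing in for the real segmentation loss.
    return tf.reduce_mean(tf.square(y_true - y_pred))

@tf.function
def train_step():
    # range() is plain Python, so this comprehension is unrolled at trace time.
    losses = [train_loss(Y, y_predict[:, :, :, :, :, id_]) for id_ in range(MC_RUNS)]
    return tf.math.reduce_mean(losses)

print(train_step())   # a scalar tensor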

【Comments】:
