FailedPreconditionError while trying to use RMSPropOptimizer on tensorflow

Posted: 2016-06-06 02:08:41

I'm trying to use RMSPropOptimizer to minimize a loss. Here's the relevant part of the code:

import tensorflow as tf

#build large convnet...
#...

opt = tf.train.RMSPropOptimizer(learning_rate=0.0025, decay=0.95)

#do stuff to get targets and loss...
#...

grads_and_vars = opt.compute_gradients(loss)
capped_grads_and_vars = [(tf.clip_by_value(g, -1, 1), v) for g, v in grads_and_vars]
opt_op = opt.apply_gradients(capped_grads_and_vars)

sess = tf.Session()
sess.run(tf.initialize_all_variables())
while(1):
    sess.run(opt_op)

The problem is that as soon as I run it I get the following error:

W tensorflow/core/common_runtime/executor.cc:1091] 0x10a0bba40 Compute status: Failed precondition: Attempting to use uninitialized value train/output/bias/RMSProp
     [[Node: RMSProp/update_train/output/bias/ApplyRMSProp = ApplyRMSProp[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](train/output/bias, train/output/bias/RMSProp, train/output/bias/RMSProp_1, RMSProp/learning_rate, RMSProp/decay, RMSProp/momentum, RMSProp/epsilon, clip_by_value_9)]]
     [[Node: _send_MergeSummary/MergeSummary_0 = _Send[T=DT_STRING, client_terminated=true, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=-6901001318975381332, tensor_name="MergeSummary/MergeSummary:0", _device="/job:localhost/replica:0/task:0/cpu:0"](MergeSummary/MergeSummary)]]
Traceback (most recent call last):
  File "dqn.py", line 213, in <module>
    result = sess.run(opt_op)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 385, in run
    results = self._do_run(target_list, unique_fetch_targets, feed_dict_string)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 461, in _do_run
    e.code)
tensorflow.python.framework.errors.FailedPreconditionError: Attempting to use uninitialized value train/output/bias/RMSProp
     [[Node: RMSProp/update_train/output/bias/ApplyRMSProp = ApplyRMSProp[T=DT_FLOAT, use_locking=false, _device="/job:localhost/replica:0/task:0/cpu:0"](train/output/bias, train/output/bias/RMSProp, train/output/bias/RMSProp_1, RMSProp/learning_rate, RMSProp/decay, RMSProp/momentum, RMSProp/epsilon, clip_by_value_9)]]
Caused by op u'RMSProp/update_train/output/bias/ApplyRMSProp', defined at: 
  File "dqn.py", line 159, in qLearnMinibatch
    opt_op = self.opt.apply_gradients(capped_grads_and_vars)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 288, in apply_gradients
    update_ops.append(self._apply_dense(grad, var))
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/rmsprop.py", line 103, in _apply_dense
    grad, use_locking=self._use_locking).op
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/training/gen_training_ops.py", line 171, in apply_rms_prop
    grad=grad, use_locking=use_locking, name=name)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/ops/op_def_library.py", line 659, in apply_op
    op_def=op_def)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1904, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/Users/home/miniconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1083, in __init__
    self._traceback = _extract_stack()

Note that I don't get this error when using the usual GradientDescentOptimizer. As you can see above, I am initializing my variables, but I have no idea what 'train/output/bias/RMSProp' is, because I never created any such variable. I only have 'train/output/bias', which is initialized above.

Thanks!

【Comments】:

【Answer 1】:

So, for anyone who runs into similar trouble in the future, I found this post helpful: Tensorflow: Using Adam optimizer

Basically, I was running

sess.run(tf.initialize_all_variables()) 

before I had defined my loss-minimization ops:

loss = tf.square(targets)
#create the gradient descent op
grads_and_vars = opt.compute_gradients(loss)
capped_grads_and_vars = [(tf.clip_by_value(g, -self.clip_delta, self.clip_delta), v) for g, v in grads_and_vars]    #gradient capping
self.opt_op = self.opt.apply_gradients(capped_grads_and_vars)

These ops need to be defined before the initialization op is run!
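The underlying mechanism can be shown with a minimal pure-Python sketch (this is not TensorFlow; all class and function names here are illustrative stand-ins). The point it demonstrates: `tf.initialize_all_variables()` only covers the variables that exist at the moment the op is created, and `apply_gradients` later creates extra "slot" variables (like `train/output/bias/RMSProp`, RMSProp's moving average of squared gradients) that the earlier init op never sees.

```python
class Graph:
    """Stand-in for a TF graph: just tracks every variable ever created."""
    def __init__(self):
        self.variables = []

    def variable(self, name):
        self.variables.append(name)
        return name

class InitOp:
    """Like tf.initialize_all_variables(): snapshots the variables that
    exist at creation time; running it covers only those."""
    def __init__(self, graph):
        self.covered = list(graph.variables)

def apply_rmsprop(graph, params):
    """Like opt.apply_gradients(): creates one slot variable per parameter."""
    return [graph.variable(p + "/RMSProp") for p in params]

def run_update(init_op, graph):
    """Like sess.run(opt_op) after sess.run(init_op)."""
    uninitialized = [v for v in graph.variables if v not in init_op.covered]
    if uninitialized:
        raise RuntimeError(
            "Attempting to use uninitialized value " + uninitialized[0])

# Wrong order: init op created before the optimizer's slot variables exist.
g = Graph()
g.variable("train/output/bias")
init = InitOp(g)                        # snapshot taken too early
apply_rmsprop(g, ["train/output/bias"])
try:
    run_update(init, g)
except RuntimeError as e:
    print(e)  # Attempting to use uninitialized value train/output/bias/RMSProp

# Right order: build the whole graph first, then create the init op.
g2 = Graph()
g2.variable("train/output/bias")
apply_rmsprop(g2, ["train/output/bias"])
init2 = InitOp(g2)
run_update(init2, g2)                   # no error
```

In the real code, the fix is correspondingly simple: move `sess.run(tf.initialize_all_variables())` so it runs only after `opt.apply_gradients(...)` has been called, at which point the init op also covers the optimizer's slot variables.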

【Discussion】:
