TensorFlow: removing NaNs from accumulated gradients
For a function-approximation problem I am trying to accumulate gradients, but I find that some of these gradients are sometimes NaN (i.e., undefined), even though the loss is always real-valued. I think this may be due to numerical instability, and I am basically looking for a simple way to remove the NaNs from the computed gradients.
Starting from the solution to this question, I tried the following:
# Optimizer definition - nothing different from any classical example
opt = tf.train.AdamOptimizer()
## Retrieve all trainable variables you defined in your graph
tvs = tf.trainable_variables()
## Creation of a list of variables with the same shape as the trainable ones
# initialized with 0s
accum_vars = [tf.Variable(tf.zeros_like(tv.initialized_value()), trainable=False) for tv in tvs]
zero_ops = [tv.assign(tf.zeros_like(tv)) for tv in accum_vars]
## Calls the compute_gradients function of the optimizer to obtain... the list of gradients
gvs_ = opt.compute_gradients(rmse, tvs)
gvs = tf.where(tf.is_nan(gvs_), tf.zeros_like(gvs_), gvs_)
## Adds to each element from the list you initialized earlier with zeros its gradient (works because accum_vars and gvs are in the same order)
accum_ops = [accum_vars[i].assign_add(gv[0]) for i, gv in enumerate(gvs)]
## Define the training step (part with variable value update)
train_step = opt.apply_gradients([(accum_vars[i], gv[1]) for i, gv in enumerate(gvs)])
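The accumulation scheme above (zero the buffers, add each mini-batch's gradients, then apply the summed result) can be sketched without TensorFlow. This is a minimal NumPy analogue with hypothetical shapes, not the graph ops themselves:

```python
import numpy as np

# hypothetical per-variable gradient shapes
shapes = [(2, 3), (3,)]
accum = [np.zeros(s) for s in shapes]   # analogue of accum_vars

def zero_ops():
    # analogue of tv.assign(tf.zeros_like(tv))
    for a in accum:
        a[:] = 0.0

def accum_ops(grads):
    # analogue of accum_vars[i].assign_add(gv[0]) for one mini-batch
    for a, g in zip(accum, grads):
        a += g

zero_ops()
for _ in range(4):  # accumulate 4 mini-batches of all-ones gradients
    accum_ops([np.ones(s) for s in shapes])
# each buffer now holds the sum over the 4 mini-batches (4.0 everywhere)
```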
So basically, the key idea is this line:
gvs = tf.where(tf.is_nan(gvs_), tf.zeros_like(gvs_), gvs_)
But when I apply this idea, I get the following error:
ValueError: Tried to convert 'x' to a tensor and failed. Error: Dimension 1 in both shapes must be equal, but are 30 and 9. Shapes are [2,30] and [2,9]. From merging shape 2 with other shapes. for 'IsNan/packed' (op: 'Pack') with input shapes: [2,9,30], [2,30,9], [2,30], [2,9].
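The error arises because passing a Python list to tf.is_nan makes TensorFlow try to pack the per-variable gradients into one tensor, which fails when their shapes differ. The same failure can be reproduced with NumPy (the shapes below are taken from the error message):

```python
import numpy as np

# per-variable gradient shapes from the error message
grads = [np.zeros((2, 9, 30)), np.zeros((2, 30, 9)),
         np.zeros((2, 30)), np.zeros((2, 9))]

try:
    np.stack(grads)  # analogous to the Pack op the list triggers
    packed = True
except ValueError:
    packed = False   # mismatched shapes cannot be packed into one tensor
```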
Answer
compute_gradients returns a list of (gradient, variable) pairs in your case, not a single tensor. You probably want to apply tf.where to each gradient individually:
gvs_ = [(tf.where(tf.is_nan(grad), tf.zeros_like(grad), grad), val) for grad,val in gvs_]
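The per-pair masking logic can be checked with a NumPy sketch, since np.where behaves like tf.where here. The gradient values and variable names below are hypothetical:

```python
import numpy as np

def replace_nans(grad_var_pairs):
    # mirrors tf.where(tf.is_nan(grad), tf.zeros_like(grad), grad),
    # applied to each gradient separately so shapes never need to match
    return [(np.where(np.isnan(g), np.zeros_like(g), g), v)
            for g, v in grad_var_pairs]

# hypothetical (gradient, variable) pairs with differing shapes
gvs_ = [(np.array([1.0, np.nan]), "w"),
        (np.array([[np.nan, 2.0], [3.0, 4.0]]), "b")]
gvs = replace_nans(gvs_)
# NaN entries become 0.0; finite entries are left untouched
```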