Attempting to use uninitialized value - even if I did initialization
【Posted】2019-02-17 03:06:24 【Description】The initialization error still shows up even after I run the global initializer.
The initialization error looks like this:
FailedPreconditionError: Attempting to use uninitialized value biases
     [[Node: biases/read = Identity[T=DT_FLOAT, _class=["loc:@Adagrad/update_biases/ApplyAdagrad"], _device="/job:localhost/replica:0/task:0/device:CPU:0"](biases)]]
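For context, TF 1.x raises this error whenever a variable is read in a session before its initializer has run. A minimal sketch (not the question's code) that reproduces it:

    import tensorflow as tf

    tf.reset_default_graph()
    with tf.Session() as session:
        session.run(tf.global_variables_initializer())  # no variables exist yet
        b = tf.Variable(tf.zeros([1]), name='biases')   # created after the init op ran
        session.run(b)  # FailedPreconditionError: Attempting to use uninitialized value biases

The initializer op only covers variables that exist when it is created, which is exactly what goes wrong in the code below.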
import functools
import numpy as np
import tensorflow as tf

def lazy_property(function):
    attribute = '_cache_' + function.__name__

    @property
    @functools.wraps(function)
    def decorator(self):
        # compute the wrapped function once, then serve the cached result
        if not hasattr(self, attribute):
            setattr(self, attribute, function(self))
        return getattr(self, attribute)

    return decorator
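For reference, this decorator turns a method into a cached, read-only property: the body runs once on first access and the result is stored on the instance. A small sketch of that behavior, using a hypothetical Toy class:

    class Toy:
        calls = 0

        @lazy_property
        def value(self):
            Toy.calls += 1
            return 42

    t = Toy()
    print(t.value, t.value, Toy.calls)  # 42 42 1 -- the body ran only once

This caching is why, in the Model class below, the TensorFlow variables are only created the first time model.optimize is accessed.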
class Model:
    def __init__(self, data, target):
        self.data = data
        self.target = target
        self._logits = None
        self._prediction = None
        self._optimize = None
        self._error = None

    @lazy_property
    def logits(self):
        # 10 output units, one per MNIST class
        w = tf.Variable(tf.truncated_normal([784, 10]), name='weights')
        b = tf.Variable(tf.zeros([10]), name='biases')
        self._logits = tf.matmul(self.data, w) + b
        return self._logits

    @lazy_property
    def prediction(self):
        self._prediction = tf.nn.softmax(self.logits)
        return self._prediction

    @lazy_property
    def optimize(self):
        labels = tf.to_int64(self.target)
        # the sparse cross-entropy op applies softmax itself, so feed it raw logits
        cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
            logits=self.logits, labels=labels, name='xentropy')
        loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
        self._optimize = tf.train.AdagradOptimizer(0.05).minimize(loss)
        return self._optimize

    @lazy_property
    def error(self):
        # target already holds class indices (see the np.argmax below), so compare directly
        mistakes = tf.not_equal(self.target, tf.argmax(self.prediction, 1))
        return tf.reduce_mean(tf.cast(mistakes, tf.float32))
batch_size = 100
num_steps = 1000

tf.reset_default_graph()
data = MNIST(data_dir="data/MNIST/")  # dataset helper exposing random_batch()

X = tf.placeholder(tf.float32, [batch_size, 784], name='Placeholder_Input')
Y = tf.placeholder(tf.int64, [batch_size], name='Placeholder_Output')

model = Model(X, Y)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for step in range(num_steps):
        model = Model(X, Y)
        for _ in range(100):
            x_batch, y_true_batch, _ = data.random_batch(batch_size=batch_size)
            y_true_batch = np.argmax(y_true_batch, axis=1)
            error, _ = session.run(model.optimize, feed_dict={X: x_batch, Y: y_true_batch})
            if step % 100 == 0:
                print("Error rate @ iter %d : %f" % (step, error))
【Comments】:
You use _ first as the loop variable of the for loop, then assign it the third output of data.random_batch, and then again part of the output of session.run. Fix that. Also, you only run model.optimize in session.run, so you don't need to assign its result to two variables; you can just do error = session.run(model.optimize, feed_dict={X: x_batch, Y: y_true_batch})
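In other words, session.run on a single bare op returns None, so there is nothing to unpack into two names. A minimal sketch of the point:

    # session.run on a single op returns None, so this would fail:
    #   error, _ = session.run(model.optimize, ...)   # TypeError: cannot unpack None
    # Running the op alone looks like this:
    session.run(model.optimize, feed_dict={X: x_batch, Y: y_true_batch})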
【Answer 1】:
You should run session.run(tf.global_variables_initializer()) only once the model is fully defined. Note that you define a new model at every step, and its variables are only instantiated when you call model.optimize (the lazy properties build the graph on first access), which happens after the initializer has already run. Here is my suggestion:
model = Model(X, Y)
optimize = model.optimize  # builds the graph (and its variables) up front

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for step in range(num_steps):
        for _ in range(100):
            x_batch, y_true_batch, _ = data.random_batch(batch_size=batch_size)
            y_true_batch = np.argmax(y_true_batch, axis=1)
            # fetch the error metric alongside the training op so there is a value to print
            error, _ = session.run([model.error, optimize],
                                   feed_dict={X: x_batch, Y: y_true_batch})
            if step % 100 == 0:
                print("Error rate @ iter %d : %f" % (step, error))