ValueError: Cannot feed value of shape (784,) for Tensor 'x:0', which has shape '(?, 784)'

Posted: 2019-02-19 18:37:33

Problem description:

This is my first time using Tensorflow. There seem to be many questions about this ValueError, but none of them brought me any relief. I am using the notMNIST dataset, split 70/30 into train and test.

The error message seems to indicate a problem with my mini-batches. I have printed the placeholder shapes and reshaped the input and label data, with no success.

import numpy as np
import tensorflow as tf

tf.reset_default_graph()

num_inputs = 28*28 # Size of images in pixels
num_hidden1 = 500
num_hidden2 = 500
num_outputs = len(np.unique(y)) # Number of classes (labels)
learning_rate = 0.0011

inputs = tf.placeholder(tf.float32, shape=[None, num_inputs], name="x")
labels = tf.placeholder(tf.int32, shape=[None], name = "y")

print(np.expand_dims(inputs, axis=0))
print(np.expand_dims(labels, axis=0))

def neuron_layer(x, num_neurons, name, activation=None):
    with tf.name_scope(name):
        num_inputs = int(x.get_shape()[1])
        stddev = 2 / np.sqrt(num_inputs)
        init = tf.truncated_normal([num_inputs, num_neurons], stddev=stddev)
        W = tf.Variable(init, name="weights")
        b = tf.Variable(tf.zeros([num_neurons]), name="biases")
        z = tf.matmul(x, W) + b
        if activation == "sigmoid":
            return tf.sigmoid(z)
        elif activation == "relu":
            return tf.nn.relu(z)
        else:
            return z


with tf.name_scope("dnn"):
    hidden1 = neuron_layer(inputs, num_hidden1, "hidden1", activation="relu")
    hidden2 = neuron_layer(hidden1, num_hidden2, "hidden2", activation="relu")
    logits = neuron_layer(hidden2, num_outputs, "output")

with tf.name_scope("loss"):
    xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits)
    loss = tf.reduce_mean(xentropy, name="loss")

with tf.name_scope("evaluation"):
    correct = tf.nn.in_top_k(logits, labels, 1)
    accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

with tf.name_scope("train"):
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    grads = optimizer.compute_gradients(loss)    
    training_op = optimizer.apply_gradients(grads)

for var in tf.trainable_variables():
    tf.summary.histogram(var.op.name + "/values", var)

for grad, var in grads:
    if grad is not None:
        tf.summary.histogram(var.op.name + "/gradients", grad)

# summary
accuracy_summary = tf.summary.scalar('accuracy', accuracy)


# merge all summary
tf.summary.histogram('hidden1/activations', hidden1)
tf.summary.histogram('hidden2/activations', hidden2)
merged = tf.summary.merge_all()

init = tf.global_variables_initializer()
saver = tf.train.Saver()

from datetime import datetime
now = datetime.utcnow().strftime("%Y%m%d%H%M%S")
root_logdir = "tf_logs/example03/dnn_final"
logdir = "{}/run-{}/".format(root_logdir, now)

train_writer = tf.summary.FileWriter("models/dnn0/train", tf.get_default_graph())
test_writer = tf.summary.FileWriter("models/dnn0/test", tf.get_default_graph())

num_epochs = 50
batch_size = 128


with tf.Session() as sess:
    init.run()
    print("Epoch\tTrain accuracy\tTest accuracy")
    for epoch in range(num_epochs):
        for idx_start in range(0, x_train.shape[0], batch_size):
            idx_end = num_epochs
            x_batch, y_batch = x_train[batch_size], y_train[batch_size]
            sess.run(training_op, feed_dict={inputs: x_batch, labels: y_batch})

        summary_train, acc_train = sess.run([merged, accuracy],
                                            feed_dict={x: x_batch, y: y_batch})
        summary_test, acc_test = sess.run([accuracy_summary, accuracy],
                                          feed_dict={x: x_test, y: y_test})

        train_writer.add_summary(summary_train, epoch)
        test_writer.add_summary(summary_test, epoch)

        print("{}\t{}\t{}".format(epoch, acc_train, acc_test))

    save_path = saver.save(sess, "models/dnn0.ckpt")

The following error

ValueError: Cannot feed value of shape (784,) for Tensor 'x:0', which has shape '(?, 784)'

is raised at line 96,

sess.run(training_op, feed_dict={inputs: x_batch, labels: y_batch})
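The mismatch can be reproduced with plain numpy: inside the loop, `x_train[batch_size]` integer-indexes a single row of shape `(784,)` instead of slicing a batch of shape `(batch, 784)`. The sketch below uses dummy arrays standing in for the real notMNIST data (shapes assumed from the question), and also shows the `np.expand_dims` workaround discussed in the comments:

```python
import numpy as np

# Dummy stand-ins for the notMNIST arrays (assumed shapes, not the real data).
x_train = np.zeros((1000, 784), dtype=np.float32)
y_train = np.zeros(1000, dtype=np.int32)
batch_size = 128

# Integer indexing selects ONE example and drops the batch dimension:
single = x_train[batch_size]
print(single.shape)             # (784,)  -- rejected by the (?, 784) placeholder

# Slicing keeps the batch dimension, which is what the placeholder expects:
batch = x_train[0:batch_size]
print(batch.shape)              # (128, 784)

# np.expand_dims adds the missing leading axis; the same call also fixes the
# scalar label, whose shape () is rejected by the (?,) placeholder:
x_fed = np.expand_dims(single, axis=0)                # (1, 784)
y_fed = np.expand_dims(y_train[batch_size], axis=0)   # (1,)
print(x_fed.shape, y_fed.shape)
```

This is only a shape demonstration; slicing full batches (as in the accepted direction below the fold) is the cleaner fix, since it trains on more than one example per step.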

Comments:

Use np.expand_dims(x_batch, axis=0). — When is the best place to do this? — Replace the line with sess.run(training_op, feed_dict={inputs: np.expand_dims(x_batch, axis=0), labels: y_batch}). — Thanks, that looks like it works. y_batch now has the same problem, but it does not respond to the same fix: ValueError: Cannot feed value of shape () for Tensor 'y:0', which has shape '(?,)'

Answer 1:

In this line, you are referring to inputs and labels:

sess.run(training_op, feed_dict={inputs: x_batch, labels: y_batch})

whereas in the lines below,

summary_train, acc_train = sess.run([merged, accuracy],
                                    feed_dict={x: x_batch, y: y_batch})
summary_test, acc_test = sess.run([accuracy_summary, accuracy],
                                  feed_dict={x: x_test, y: y_test})

you are referring to x and y. Change these so they match: the feed_dict keys must be the Python variables that hold the placeholders (inputs and labels), not other names.
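Combining this answer with proper batch slicing, the inner loop could look like the sketch below. It is numpy-only dummy data with assumed shapes; the sess.run call is left as a comment because it needs the TF1 graph from the question:

```python
import numpy as np

# Dummy stand-ins for the notMNIST 70/30 split (assumed shapes).
x_train = np.zeros((700, 784), dtype=np.float32)
y_train = np.zeros(700, dtype=np.int32)
batch_size = 128

for idx_start in range(0, x_train.shape[0], batch_size):
    idx_end = idx_start + batch_size       # not num_epochs, as in the question
    x_batch = x_train[idx_start:idx_end]   # shape (<=128, 784)
    y_batch = y_train[idx_start:idx_end]   # shape (<=128,)
    # The feed_dict keys are the placeholder variables defined in the question:
    # sess.run(training_op, feed_dict={inputs: x_batch, labels: y_batch})

print(x_batch.shape, y_batch.shape)        # last partial batch: (60, 784) (60,)
```

Note that slicing past the end of an array is safe in numpy, so the final iteration simply yields a smaller batch, which the (?, 784) placeholder accepts.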


Answer 2:

Your tensors do have mismatched shapes: you are feeding a tensor with the batch index at the end into a placeholder that expects the batch index at the front.

Execute x_batch = numpy.swapaxes(x_batch, 1, 0) before feeding the tensor.
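For reference, np.swapaxes only applies when the array actually has two axes. On the 1-D (784,) batch from this question there is no axis 1, so the call fails, which is consistent with the error reported in the comment below. A quick sketch:

```python
import numpy as np

# A 2-D array with the batch index last can be transposed to batch-first:
a = np.zeros((784, 128))
b = np.swapaxes(a, 1, 0)
print(b.shape)                  # (128, 784)

# On a 1-D array there is no axis 1, so swapaxes raises -- matching the
# follow-up error in the comment (the batch was 1-D, not a transposed 2-D):
try:
    np.swapaxes(np.zeros(784), 1, 0)
except Exception as e:          # AxisError in current numpy versions
    print(type(e).__name__)
```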

Comments:

Thanks Andreas, but this seems to return ValueError: bad axis1 argument to swapaxes.
