Udacity Deep Learning, Assignment 3, Part 3: TensorFlow dropout
I am currently working on Assignment 3 of the Udacity Deep Learning course. I have most of it done and it works, but I noticed that Problem 3, which asks you to use dropout with TensorFlow, seems to degrade my performance rather than improve it.

So I think I am doing something wrong. I will put my full code here. If someone can explain to me how to use dropout correctly, I would appreciate it. (Or confirm that I am using it correctly and it simply does not help in this case.) It drops accuracy from over 94% (without dropout) down to 91.5%. The degradation is even worse if you do not use L2 regularization.
def create_nn(dataset, weights_hidden, biases_hidden, weights_out, biases_out):
    # Input layer -> hidden layer
    logits = tf.add(tf.matmul(dataset, weights_hidden), biases_hidden)
    # Dropout layer 1
    logits = tf.nn.dropout(logits, 0.5)
    # Hidden ReLU layer
    logits = tf.nn.relu(logits)
    # Dropout layer 2
    logits = tf.nn.dropout(logits, 0.5)
    # Output: connect hidden layer to a node for each class
    logits = tf.add(tf.matmul(logits, weights_out), biases_out)
    return logits
# Create model
batch_size = 128
hidden_layer_size = 1024
beta = 1e-3

graph = tf.Graph()
with graph.as_default():
    # Input data. For the training data, we use a placeholder that will be fed
    # at run time with a training minibatch.
    tf_train_dataset = tf.placeholder(tf.float32,
                                      shape=(batch_size, image_size * image_size))
    tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
    tf_valid_dataset = tf.constant(valid_dataset)
    tf_test_dataset = tf.constant(test_dataset)

    # Variables.
    weights_hidden = tf.Variable(
        tf.truncated_normal([image_size * image_size, hidden_layer_size]))
    biases_hidden = tf.Variable(tf.zeros([hidden_layer_size]))
    weights_out = tf.Variable(tf.truncated_normal([hidden_layer_size, num_labels]))
    biases_out = tf.Variable(tf.zeros([num_labels]))

    # Training computation.
    logits = create_nn(tf_train_dataset, weights_hidden, biases_hidden, weights_out, biases_out)
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=tf_train_labels, logits=logits))
    loss += beta * (tf.nn.l2_loss(weights_hidden) + tf.nn.l2_loss(weights_out))

    # Optimizer.
    optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

    # Predictions for the training, validation, and test data.
    # Note: the validation/test paths below are built by hand, without dropout.
    train_prediction = tf.nn.softmax(logits)
    valid_prediction = tf.nn.softmax(
        tf.matmul(tf.nn.relu(tf.matmul(tf_valid_dataset, weights_hidden) + biases_hidden), weights_out) + biases_out)
    test_prediction = tf.nn.softmax(
        tf.matmul(tf.nn.relu(tf.matmul(tf_test_dataset, weights_hidden) + biases_hidden), weights_out) + biases_out)
num_steps = 10000

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    print("Initialized")
    for step in range(num_steps):
        # Pick an offset within the training data, which has been randomized.
        # Note: we could use better randomization across epochs.
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        # Generate a minibatch.
        batch_data = train_dataset[offset:(offset + batch_size), :]
        batch_labels = train_labels[offset:(offset + batch_size), :]
        # Prepare a dictionary telling the session where to feed the minibatch.
        # The key of the dictionary is the placeholder node of the graph to be fed,
        # and the value is the numpy array to feed to it.
        feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels}
        _, l, predictions = session.run([optimizer, loss, train_prediction], feed_dict=feed_dict)
        if step % 500 == 0:
            print("Minibatch loss at step %d: %f" % (step, l))
            print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
            print("Validation accuracy: %.1f%%" % accuracy(valid_prediction.eval(), valid_labels))
    print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))
You need to turn off dropout during inference. It may not be obvious at first, but the fact that dropout is hard-coded into the NN architecture means it will affect the test data during inference as well. You can avoid this by creating a placeholder keep_prob rather than passing the value 0.5 directly. For example:
keep_prob = tf.placeholder(tf.float32)
logits = tf.nn.dropout(logits, keep_prob)
To turn dropout on during training, set the keep_prob value to 0.5:
feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels, keep_prob: 0.5}
During inference/evaluation, you should be able to do something like the following, setting keep_prob to 1.0 in the eval call:

accuracy.eval(feed_dict={x: test_prediction, y_: test_labels, keep_prob: 1.0})
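For reference, here is a minimal sketch of how the keep_prob placeholder could be threaded through the question's own graph (this is an assumption on my part, not the poster's exact code: create_nn is reused to build the validation/test paths so that all three share the same dropout switch):

def create_nn(dataset, weights_hidden, biases_hidden, weights_out, biases_out, keep_prob):
    # Same structure as in the question, but the keep probability is fed at run time.
    logits = tf.add(tf.matmul(dataset, weights_hidden), biases_hidden)
    logits = tf.nn.dropout(logits, keep_prob)
    logits = tf.nn.relu(logits)
    logits = tf.nn.dropout(logits, keep_prob)
    return tf.add(tf.matmul(logits, weights_out), biases_out)

keep_prob = tf.placeholder(tf.float32)
logits = create_nn(tf_train_dataset, weights_hidden, biases_hidden, weights_out, biases_out, keep_prob)
valid_prediction = tf.nn.softmax(
    create_nn(tf_valid_dataset, weights_hidden, biases_hidden, weights_out, biases_out, keep_prob))
test_prediction = tf.nn.softmax(
    create_nn(tf_test_dataset, weights_hidden, biases_hidden, weights_out, biases_out, keep_prob))

# Training step: dropout on.
feed_dict = {tf_train_dataset: batch_data, tf_train_labels: batch_labels, keep_prob: 0.5}
# Evaluation: dropout off.
valid_acc = accuracy(valid_prediction.eval(feed_dict={keep_prob: 1.0}), valid_labels)

Since tf.nn.dropout scales the kept activations by 1/keep_prob during training, feeding keep_prob: 1.0 turns the op into an identity, so the evaluation paths see the full, unscaled network.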
EDIT:
Since the issue does not seem to be that dropout is used at inference, the next culprit would be a dropout rate that is too high for this network size. You can try lowering the dropout to 20% (i.e. keep_prob = 0.8), or increasing the size of the network to give the model a chance to learn a representation.
I actually gave your code a try, and with 20% dropout at this network size I get around 93.5%. I have added some additional resources below, including the original dropout paper, to help clarify the intuition behind it and to expand on further tips when using dropout, such as increasing the learning rate.
References:
- Deep MNIST for Experts: has an example of the above (dropout on/off) using MNIST
- Dropout Regularization in Deep Learning Models With Keras
- Dropout: A Simple Way to Prevent Neural Networks from Overfitting
I think two things could be causing the problem.
First, I would not recommend using dropout in the first layer (and 50% is too much; if you must, use a lower rate in the 10-25% range), because with such high dropout even the higher-level features are not learned and propagated to the deeper layers. Also try a range of dropout rates from 10% to 50% and see how the accuracy changes; there is no way to know beforehand which value will work (a small sweep sketch follows this answer).
Second, you do not usually use dropout at inference. To fix this, pass the keep_prob parameter of dropout as a placeholder and set it to 1 when inferring.
Also, if the accuracy values you quote are training accuracy, there may not even be much of a problem in the first place: dropout will usually lower training accuracy by a small amount when you are not overfitting. It is the test/validation accuracy that needs to be monitored closely.
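To make the "try a range of rates" advice concrete, here is a minimal sketch of such a sweep; run_training is a hypothetical helper, assumed to wrap the graph construction and training loop from the question, feed keep_prob=train_keep_prob during training and keep_prob=1.0 when evaluating, and return the validation accuracy:

# Hypothetical sweep over dropout rates (keep_prob 0.9 = 10% dropout, 0.5 = 50%).
results = {}
for train_keep_prob in [0.9, 0.8, 0.7, 0.6, 0.5]:
    valid_acc = run_training(train_keep_prob)
    results[train_keep_prob] = valid_acc
    print("keep_prob %.1f -> validation accuracy %.1f%%" % (train_keep_prob, valid_acc))

# Pick the rate with the best validation (not training) accuracy.
best = max(results, key=results.get)
print("best keep_prob: %.1f" % best)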