A First Look at TensorFlow
Posted by kossle
I. Handwritten Digit Recognition with a TensorFlow Softmax Regression Model
The detailed steps are as follows:
1. Load the MNIST data: input_data.read_data_sets('MNIST_data', one_hot=True)
2. Start a TensorFlow InteractiveSession: sess = tf.InteractiveSession()
3. Build the softmax regression model: placeholders (tf.placeholder), variables (tf.Variable), class prediction and the loss function (tf.nn.softmax, tf.reduce_sum), training (tf.train.GradientDescentOptimizer), and evaluation.
Result: about 91% accuracy on the test set (a condensed code sketch of these steps follows).
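In code, the steps above look like the following TF1-style sketch; it is essentially the softmax half of the DeepMnist.py listing in Section II, with comments mapping lines back to the steps ('MNIST_data' is just the download directory):

from tensorflow.examples.tutorials.mnist import input_data
import tensorflow as tf

# 1. Load MNIST with one-hot labels.
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# 2. Start an InteractiveSession.
sess = tf.InteractiveSession()

# 3. Placeholders for the 784-pixel images and 10-class labels,
#    zero-initialized weight and bias variables, and the linear model.
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])
w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
sess.run(tf.global_variables_initializer())
y = tf.matmul(x, w) + b

# Softmax cross-entropy loss and a gradient-descent training step.
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
for _ in range(1000):
  batch = mnist.train.next_batch(100)
  train_step.run(feed_dict={x: batch[0], y_: batch[1]})

# Evaluate: fraction of test images whose predicted class matches the label.
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))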
II. Building a Multi-Layer Convolutional Network
The detailed steps are as follows:
1. Weight initialization functions
2. Convolution and pooling functions
3. First convolutional layer
4. Second convolutional layer
5. Densely connected layer (after two rounds of 2x2 max-pooling, the 28x28 image is reduced to 7x7 with 64 feature maps, hence the 7*7*64 input size)
6. Output layer
7. Training and evaluating the model
Code (DeepMnist.py):
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

import tensorflow as tf
sess = tf.InteractiveSession()

# Placeholders for images and one-hot labels.
x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

w = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))

sess.run(tf.global_variables_initializer())

y = tf.matmul(x, w) + b

cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y))

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)

for _ in range(1000):
  batch = mnist.train.next_batch(100)
  train_step.run(feed_dict={x: batch[0], y_: batch[1]})

correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels}))

# 1. Weight initialization: small truncated-normal weights, slightly
#    positive biases to avoid dead ReLU units.
def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)

# 2. Convolution with stride 1 and SAME padding; 2x2 max-pooling.
def conv2d(x, w):
  return tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1],
                        padding='SAME')

# 3. First convolutional layer: 5x5 patches, 1 input channel, 32 features.
w_conv1 = weight_variable([5, 5, 1, 32])
b_conv1 = bias_variable([32])

x_image = tf.reshape(x, [-1, 28, 28, 1])

h_conv1 = tf.nn.relu(conv2d(x_image, w_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

# 4. Second convolutional layer: 32 input channels, 64 features.
w_conv2 = weight_variable([5, 5, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, w_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)

# 5. Densely connected layer on the flattened 7x7x64 feature maps.
w_fc1 = weight_variable([7 * 7 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 7 * 7 * 64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, w_fc1) + b_fc1)

# Dropout between the dense layer and the output layer.
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# 6. Output layer producing the 10-class logits.
W_fc2 = weight_variable([1024, 10])
b_fc2 = bias_variable([10])

y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

# 7. Train with Adam and evaluate.
cross_entropy = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
sess.run(tf.global_variables_initializer())
for i in range(1000):
  batch = mnist.train.next_batch(50)
  if i % 100 == 0:
    train_accuracy = accuracy.eval(feed_dict={
        x: batch[0], y_: batch[1], keep_prob: 1.0})
    print("step %d, training accuracy %g" % (i, train_accuracy))
  train_step.run(feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

print("test accuracy %g" % accuracy.eval(feed_dict={
    x: mnist.test.images, y_: mnist.test.labels, keep_prob: 1.0}))
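Note how keep_prob is fed: 0.5 during training steps, so dropout is active, and 1.0 whenever accuracy is measured, so dropout is disabled at evaluation time.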
Output:
After 1,000 training iterations, the test accuracy is about 96.34%; with 20,000 iterations it exceeds 99%.
III. A Simple Feed-Forward Neural Network
The code is as follows, in two files: mnist.py, the model-building module, and fully_connected_feed.py, the training driver.
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Builds the MNIST network.

Implements the inference/loss/training pattern for model building.

1. inference() - Builds the model as far as is required for running the network
forward to make predictions.
2. loss() - Adds to the inference model the layers required to generate loss.
3. training() - Adds to the loss model the Ops required to generate and
apply gradients.

This file is used by the various "fully_connected_*.py" files and not meant to
be run.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import math

import tensorflow as tf

# The MNIST dataset has 10 classes, representing the digits 0 through 9.
NUM_CLASSES = 10

# The MNIST images are always 28x28 pixels.
IMAGE_SIZE = 28
IMAGE_PIXELS = IMAGE_SIZE * IMAGE_SIZE


def inference(images, hidden1_units, hidden2_units):
  """Build the MNIST model up to where it may be used for inference.

  Args:
    images: Images placeholder, from inputs().
    hidden1_units: Size of the first hidden layer.
    hidden2_units: Size of the second hidden layer.

  Returns:
    softmax_linear: Output tensor with the computed logits.
  """
  # Hidden 1
  with tf.name_scope('hidden1'):
    weights = tf.Variable(
        tf.truncated_normal([IMAGE_PIXELS, hidden1_units],
                            stddev=1.0 / math.sqrt(float(IMAGE_PIXELS))),
        name='weights')
    biases = tf.Variable(tf.zeros([hidden1_units]),
                         name='biases')
    hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
  # Hidden 2
  with tf.name_scope('hidden2'):
    weights = tf.Variable(
        tf.truncated_normal([hidden1_units, hidden2_units],
                            stddev=1.0 / math.sqrt(float(hidden1_units))),
        name='weights')
    biases = tf.Variable(tf.zeros([hidden2_units]),
                         name='biases')
    hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
  # Linear
  with tf.name_scope('softmax_linear'):
    weights = tf.Variable(
        tf.truncated_normal([hidden2_units, NUM_CLASSES],
                            stddev=1.0 / math.sqrt(float(hidden2_units))),
        name='weights')
    biases = tf.Variable(tf.zeros([NUM_CLASSES]),
                         name='biases')
    logits = tf.matmul(hidden2, weights) + biases
  return logits


def loss(logits, labels):
  """Calculates the loss from the logits and the labels.

  Args:
    logits: Logits tensor, float - [batch_size, NUM_CLASSES].
    labels: Labels tensor, int32 - [batch_size].

  Returns:
    loss: Loss tensor of type float.
  """
  labels = tf.to_int64(labels)
  cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
      labels=labels, logits=logits, name='xentropy')
  return tf.reduce_mean(cross_entropy, name='xentropy_mean')


def training(loss, learning_rate):
  """Sets up the training Ops.

  Creates a summarizer to track the loss over time in TensorBoard.

  Creates an optimizer and applies the gradients to all trainable variables.

  The Op returned by this function is what must be passed to the
  `sess.run()` call to cause the model to train.

  Args:
    loss: Loss tensor, from loss().
    learning_rate: The learning rate to use for gradient descent.

  Returns:
    train_op: The Op for training.
  """
  # Add a scalar summary for the snapshot loss.
  tf.summary.scalar('loss', loss)
  # Create the gradient descent optimizer with the given learning rate.
  optimizer = tf.train.GradientDescentOptimizer(learning_rate)
  # Create a variable to track the global step.
  global_step = tf.Variable(0, name='global_step', trainable=False)
  # Use the optimizer to apply the gradients that minimize the loss
  # (and also increment the global step counter) as a single training step.
  train_op = optimizer.minimize(loss, global_step=global_step)
  return train_op


def evaluation(logits, labels):
  """Evaluate the quality of the logits at predicting the label.

  Args:
    logits: Logits tensor, float - [batch_size, NUM_CLASSES].
    labels: Labels tensor, int32 - [batch_size], with values in the
      range [0, NUM_CLASSES).

  Returns:
    A scalar int32 tensor with the number of examples (out of batch_size)
    that were predicted correctly.
  """
  # For a classifier model, we can use the in_top_k Op.
  # It returns a bool tensor with shape [batch_size] that is true for
  # the examples where the label is in the top k (here k=1)
  # of all logits for that example.
  correct = tf.nn.in_top_k(logits, labels, 1)
  # Return the number of true entries.
  return tf.reduce_sum(tf.cast(correct, tf.int32))
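The file below, fully_connected_feed.py, is the driver referred to in the docstring above: it builds the graph through mnist.inference(), mnist.loss(), and mnist.training(), then runs the feed-dict training loop with periodic checkpointing and evaluation.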
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

"""Trains and Evaluates the MNIST network using a feed dictionary."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

# pylint: disable=missing-docstring
import argparse
import os.path
import sys
import time

from six.moves import xrange  # pylint: disable=redefined-builtin
import tensorflow as tf

from tensorflow.examples.tutorials.mnist import input_data
from tensorflow.examples.tutorials.mnist import mnist

# Basic model parameters as external flags.
FLAGS = None


def placeholder_inputs(batch_size):
  """Generate placeholder variables to represent the input tensors.

  These placeholders are used as inputs by the rest of the model building
  code and will be fed from the downloaded data in the .run() loop, below.

  Args:
    batch_size: The batch size will be baked into both placeholders.

  Returns:
    images_placeholder: Images placeholder.
    labels_placeholder: Labels placeholder.
  """
  # Note that the shapes of the placeholders match the shapes of the full
  # image and label tensors, except the first dimension is now batch_size
  # rather than the full size of the train or test data sets.
  images_placeholder = tf.placeholder(tf.float32, shape=(batch_size,
                                                         mnist.IMAGE_PIXELS))
  labels_placeholder = tf.placeholder(tf.int32, shape=(batch_size))
  return images_placeholder, labels_placeholder


def fill_feed_dict(data_set, images_pl, labels_pl):
  """Fills the feed_dict for training the given step.

  A feed_dict takes the form of:
  feed_dict = {
      <placeholder>: <tensor of values to be passed for placeholder>,
      ....
  }

  Args:
    data_set: The set of images and labels, from input_data.read_data_sets()
    images_pl: The images placeholder, from placeholder_inputs().
    labels_pl: The labels placeholder, from placeholder_inputs().

  Returns:
    feed_dict: The feed dictionary mapping from placeholders to values.
  """
  # Create the feed_dict for the placeholders filled with the next
  # `batch size` examples.
  images_feed, labels_feed = data_set.next_batch(FLAGS.batch_size,
                                                 FLAGS.fake_data)
  feed_dict = {
      images_pl: images_feed,
      labels_pl: labels_feed,
  }
  return feed_dict


def do_eval(sess,
            eval_correct,
            images_placeholder,
            labels_placeholder,
            data_set):
  """Runs one evaluation against the full epoch of data.

  Args:
    sess: The session in which the model has been trained.
    eval_correct: The Tensor that returns the number of correct predictions.
    images_placeholder: The images placeholder.
    labels_placeholder: The labels placeholder.
    data_set: The set of images and labels to evaluate, from
      input_data.read_data_sets().
  """
  # And run one epoch of eval.
  true_count = 0  # Counts the number of correct predictions.
  steps_per_epoch = data_set.num_examples // FLAGS.batch_size
  num_examples = steps_per_epoch * FLAGS.batch_size
  for step in xrange(steps_per_epoch):
    feed_dict = fill_feed_dict(data_set,
                               images_placeholder,
                               labels_placeholder)
    true_count += sess.run(eval_correct, feed_dict=feed_dict)
  precision = float(true_count) / num_examples
  print('  Num examples: %d  Num correct: %d  Precision @ 1: %0.04f' %
        (num_examples, true_count, precision))


def run_training():
  """Train MNIST for a number of steps."""
  # Get the sets of images and labels for training, validation, and
  # test on MNIST.
  data_sets = input_data.read_data_sets(FLAGS.input_data_dir, FLAGS.fake_data)

  # Tell TensorFlow that the model will be built into the default Graph.
  with tf.Graph().as_default():
    # Generate placeholders for the images and labels.
    images_placeholder, labels_placeholder = placeholder_inputs(
        FLAGS.batch_size)

    # Build a Graph that computes predictions from the inference model.
    logits = mnist.inference(images_placeholder,
                             FLAGS.hidden1,
                             FLAGS.hidden2)

    # Add to the Graph the Ops for loss calculation.
    loss = mnist.loss(logits, labels_placeholder)

    # Add to the Graph the Ops that calculate and apply gradients.
    train_op = mnist.training(loss, FLAGS.learning_rate)

    # Add the Op to compare the logits to the labels during evaluation.
    eval_correct = mnist.evaluation(logits, labels_placeholder)

    # Build the summary Tensor based on the TF collection of Summaries.
    summary = tf.summary.merge_all()

    # Add the variable initializer Op.
    init = tf.global_variables_initializer()

    # Create a saver for writing training checkpoints.
    saver = tf.train.Saver()

    # Create a session for running Ops on the Graph.
    sess = tf.Session()

    # Instantiate a SummaryWriter to output summaries and the Graph.
    summary_writer = tf.summary.FileWriter(FLAGS.log_dir, sess.graph)

    # And then after everything is built:

    # Run the Op to initialize the variables.
    sess.run(init)

    # Start the training loop.
    for step in xrange(FLAGS.max_steps):
      start_time = time.time()

      # Fill a feed dictionary with the actual set of images and labels
      # for this particular training step.
      feed_dict = fill_feed_dict(data_sets.train,
                                 images_placeholder,
                                 labels_placeholder)

      # Run one step of the model.  The return values are the activations
      # from the `train_op` (which is discarded) and the `loss` Op.  To
      # inspect the values of your Ops or variables, you may include them
      # in the list passed to sess.run() and the value tensors will be
      # returned in the tuple from the call.
      _, loss_value = sess.run([train_op, loss],
                               feed_dict=feed_dict)

      duration = time.time() - start_time

      # Write the summaries and print an overview fairly often.
      if step % 100 == 0:
        # Print status to stdout.
        print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value, duration))
        # Update the events file.
        summary_str = sess.run(summary, feed_dict=feed_dict)
        summary_writer.add_summary(summary_str, step)
        summary_writer.flush()

      # Save a checkpoint and evaluate the model periodically.
      if (step + 1) % 1000 == 0 or (step + 1) == FLAGS.max_steps:
        checkpoint_file = os.path.join(FLAGS.log_dir, 'model.ckpt')
        saver.save(sess, checkpoint_file, global_step=step)
        # Evaluate against the training, validation, and test sets.
        print('Training Data Eval:')
        do_eval(sess, eval_correct, images_placeholder, labels_placeholder,
                data_sets.train)
        print('Validation Data Eval:')
        do_eval(sess, eval_correct, images_placeholder, labels_placeholder,
                data_sets.validation)
        print('Test Data Eval:')
        do_eval(sess, eval_correct, images_placeholder, labels_placeholder,
                data_sets.test)


def main(_):
  if tf.gfile.Exists(FLAGS.log_dir):
    tf.gfile.DeleteRecursively(FLAGS.log_dir)
  tf.gfile.MakeDirs(FLAGS.log_dir)
  run_training()


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument('--learning_rate', type=float, default=0.01,
                      help='Initial learning rate.')
  parser.add_argument('--max_steps', type=int, default=2000,
                      help='Number of steps to run trainer.')
  parser.add_argument('--hidden1', type=int, default=128,
                      help='Number of units in hidden layer 1.')
  parser.add_argument('--hidden2', type=int, default=32,
                      help='Number of units in hidden layer 2.')
  parser.add_argument('--batch_size', type=int, default=100,
                      help='Batch size.  Must divide evenly into the '
                           'dataset sizes.')
  parser.add_argument('--input_data_dir', type=str,
                      default='/tmp/tensorflow/mnist/input_data',
                      help='Directory to put the input data.')
  parser.add_argument('--log_dir', type=str,
                      default='/tmp/tensorflow/mnist/logs/fully_connected_feed',
                      help='Directory to put the log data.')
  parser.add_argument('--fake_data', default=False, action='store_true',
                      help='If true, uses fake data for unit testing.')
  FLAGS, unparsed = parser.parse_known_args()
  tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)