How does one do Inference with Batch Normalization with TensorFlow?

【Posted】: 2016-11-13 17:43:35
【Question】:

I am reading the original paper on BN and the Stack Overflow question How could I use Batch Normalization in TensorFlow?, which provides a very useful piece of code for inserting a batch normalization block into a neural network, but it does not give enough guidance on how to actually use the block while training the model, doing inference with it, and evaluating it.

For example, I would like to track the training error during training as well as the test error, to make sure I am not overfitting. Clearly the batch normalization block should be turned off during testing, but when evaluating the error on the training set, should it be turned off too? My main questions are:

    During inference and error evaluation, should batch normalization be turned off regardless of the dataset?
    Does that mean batch normalization should only be on during the training step?

To make things concrete, here is the (simplified) code I have been using to run batch normalization with TensorFlow, according to my understanding of what the right thing to do is:

## TRAIN
if phase_train is not None:
    #DO BN
    feed_dict_train = {x: X_train, y_: Y_train, phase_train: False}
    feed_dict_cv = {x: X_cv, y_: Y_cv, phase_train: False}
    feed_dict_test = {x: X_test, y_: Y_test, phase_train: False}
else:
    #Don't do BN
    feed_dict_train = {x: X_train, y_: Y_train}
    feed_dict_cv = {x: X_cv, y_: Y_cv}
    feed_dict_test = {x: X_test, y_: Y_test}

def get_batch_feed(X, Y, M, phase_train):
    mini_batch_indices = np.random.randint(M,size=M)
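    # (Note: np.random.randint draws these M indices with replacement.)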
    Xminibatch =  X[mini_batch_indices,:] # ( M x D^(0) )
    Yminibatch = Y[mini_batch_indices,:] # ( M x D^(L) )
    if phase_train is not None:
        #DO BN
        feed_dict = {x: Xminibatch, y_: Yminibatch, phase_train: True}
    else:
        #Don't do BN
        feed_dict = {x: Xminibatch, y_: Yminibatch}
    return feed_dict

with tf.Session() as sess:
    sess.run( tf.initialize_all_variables() )
    for iter_step in xrange(steps):
        feed_dict_batch = get_batch_feed(X_train, Y_train, M, phase_train)
        # Collect model statistics
        if iter_step%report_error_freq == 0:
            train_error = sess.run(fetches=l2_loss, feed_dict=feed_dict_train)
            cv_error = sess.run(fetches=l2_loss, feed_dict=feed_dict_cv)
            test_error = sess.run(fetches=l2_loss, feed_dict=feed_dict_test)

            do_stuff_with_errors(train_error, cv_error, test_error)
        # Run Train Step
        sess.run(fetches=train_step, feed_dict=feed_dict_batch)

The code I am using to produce the batch normalization block is:

def standard_batch_norm(l, x, n_out, phase_train, scope='BN'):
    """
    Batch normalization on feedforward maps.
    Args:
        x:           Tensor, batch of input activations
        n_out:       integer, depth of input maps
        phase_train: boolean tf.Variable, true indicates training phase
        scope:       string, variable scope
    Return:
        normed:      batch-normalized maps
    """
    with tf.variable_scope(scope+l):
        #beta = tf.Variable(tf.constant(0.0, shape=[n_out], dtype=tf.float64 ), name='beta', trainable=True, dtype=tf.float64 )
        #gamma = tf.Variable(tf.constant(1.0, shape=[n_out],dtype=tf.float64 ), name='gamma', trainable=True, dtype=tf.float64 )
        init_beta = tf.constant(0.0, shape=[n_out], dtype=tf.float64)
        init_gamma = tf.constant(1.0, shape=[n_out],dtype=tf.float64)
        beta = tf.get_variable(name='beta'+l, dtype=tf.float64, initializer=init_beta, regularizer=None, trainable=True)
        gamma = tf.get_variable(name='gamma'+l, dtype=tf.float64, initializer=init_gamma, regularizer=None, trainable=True)
        batch_mean, batch_var = tf.nn.moments(x, [0], name='moments')
        ema = tf.train.ExponentialMovingAverage(decay=0.5)
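        # NOTE: decay=0.5 gives the moving averages a very short memory;
        # larger values such as 0.9-0.999 are more typical, so that the
        # statistics used at inference time average over many batches.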

        def mean_var_with_update():
            ema_apply_op = ema.apply([batch_mean, batch_var])
            with tf.control_dependencies([ema_apply_op]):
                return tf.identity(batch_mean), tf.identity(batch_var)

        mean, var = tf.cond(phase_train, mean_var_with_update, lambda: (ema.average(batch_mean), ema.average(batch_var)))
        normed = tf.nn.batch_normalization(x, mean, var, beta, gamma, 1e-3)
    return normed
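
For concreteness, here is a minimal sketch of how I wire this block into a layer; the shapes, names, and sizes below are made up just for illustration:

import tensorflow as tf

x = tf.placeholder(tf.float64, shape=[None, 784], name='x')
phase_train = tf.placeholder(tf.bool, name='phase_train')

# Linear layer followed by the batch-norm block defined above.
W1 = tf.get_variable('W1', shape=[784, 100], dtype=tf.float64)
z1 = tf.matmul(x, W1)
a1 = tf.nn.relu(standard_batch_norm('1', z1, 100, phase_train))

# Training step: phase_train=True takes the tf.cond branch that uses
# batch statistics and updates the moving averages.
#   sess.run(train_step, feed_dict={x: Xbatch, y_: Ybatch, phase_train: True})
# Error evaluation on any dataset (train, CV, or test alike): feed
# phase_train=False so the stored moving averages are used instead.
#   sess.run(l2_loss, feed_dict={x: X_eval, y_: Y_eval, phase_train: False})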

【Comments】:

Out of curiosity, why not use the "official" batch norm layer: github.com/tensorflow/tensorflow/blob/… I haven't dug into it deeply, but as far as I can tell from the docs you just use the binary parameter is_training in this batch_norm layer and set it to true only for the training phase.

@MaximHaytovich I didn't even know it existed; if you go to their API (tensorflow.org/versions/r0.9/api_docs/python/…) BN isn't even mentioned. How did you find it? I'm shocked no one has talked about it before.

@MaximHaytovich I was under the impression that the code provided in the other SO question was the only way to use BN in TensorFlow. I guess I was wrong and that SO post is outdated, right?

Well... I googled it :) Most likely it isn't mentioned in the API because it's included in a version that hasn't been released yet, or something like that. But give it a try and post the results here. I'll post this as an answer now.

【Answer 1】:

I found that there is an "official" batch_norm layer in TensorFlow. Try it out:

https://github.com/tensorflow/tensorflow/blob/b826b79718e3e93148c3545e7aa3f90891744cc0/tensorflow/contrib/layers/python/layers/layers.py#L100

It most likely isn't mentioned in the docs because it's only included in some RC or "beta" version.

I haven't dug into this deeply yet, but as far as I can tell from the docs you just use the binary parameter is_training in this batch_norm layer, setting it to true only for the training phase. Give it a try.
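
As a rough, untested sketch of what I mean (assuming is_training can also be fed as a boolean placeholder, which this layer accepts; the input shape here is just for illustration):

import tensorflow as tf

is_training = tf.placeholder(tf.bool, name='is_training')
inputs = tf.placeholder(tf.float32, shape=[None, 784])
normed = tf.contrib.layers.batch_norm(inputs, is_training=is_training)

# Feed is_training=True on training steps and False when evaluating
# error on any dataset (train, validation, or test):
#   sess.run(train_step, feed_dict={..., is_training: True})
#   sess.run(loss, feed_dict={..., is_training: False})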

UPDATE: Below is code that loads the data, builds a network with one hidden ReLU layer and L2 regularization, and introduces batch normalization for both the hidden and the output layer. It runs fine and trains fine.

# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import numpy as np
import tensorflow as tf
from six.moves import cPickle as pickle

pickle_file = '/home/maxkhk/Documents/Udacity/DeepLearningCourse/SourceCode/tensorflow/examples/udacity/notMNIST.pickle'

with open(pickle_file, 'rb') as f:
  save = pickle.load(f)
  train_dataset = save['train_dataset']
  train_labels = save['train_labels']
  valid_dataset = save['valid_dataset']
  valid_labels = save['valid_labels']
  test_dataset = save['test_dataset']
  test_labels = save['test_labels']
  del save  # hint to help gc free up memory
  print('Training set', train_dataset.shape, train_labels.shape)
  print('Validation set', valid_dataset.shape, valid_labels.shape)
  print('Test set', test_dataset.shape, test_labels.shape)

image_size = 28
num_labels = 10

def reformat(dataset, labels):
  dataset = dataset.reshape((-1, image_size * image_size)).astype(np.float32)
  # Map 2 to [0.0, 1.0, 0.0 ...], 3 to [0.0, 0.0, 1.0 ...]
  labels = (np.arange(num_labels) == labels[:,None]).astype(np.float32)
  return dataset, labels
train_dataset, train_labels = reformat(train_dataset, train_labels)
valid_dataset, valid_labels = reformat(valid_dataset, valid_labels)
test_dataset, test_labels = reformat(test_dataset, test_labels)
print('Training set', train_dataset.shape, train_labels.shape)
print('Validation set', valid_dataset.shape, valid_labels.shape)
print('Test set', test_dataset.shape, test_labels.shape)


def accuracy(predictions, labels):
  return (100.0 * np.sum(np.argmax(predictions, 1) == np.argmax(labels, 1))
          / predictions.shape[0])


#for NeuralNetwork model code is below
#We will use SGD for training to save our time. Code is from Assignment 2
#beta is the new parameter - controls level of regularization.
#Feel free to play with it - the best one I found is 0.001
#notice, we introduce L2 for both biases and weights of all layers

batch_size = 128
beta = 0.001

#building tensorflow graph
graph = tf.Graph()
with graph.as_default():
  # Input data. For the training data, we use a placeholder that will be fed
  # at run time with a training minibatch.
  tf_train_dataset = tf.placeholder(tf.float32,
                                    shape=(batch_size, image_size * image_size))
  tf_train_labels = tf.placeholder(tf.float32, shape=(batch_size, num_labels))
  tf_valid_dataset = tf.constant(valid_dataset)
  tf_test_dataset = tf.constant(test_dataset)

  #introduce batchnorm
  tf_train_dataset_bn = tf.contrib.layers.batch_norm(tf_train_dataset)


  #now let's build our new hidden layer
  #that's how many hidden neurons we want
  num_hidden_neurons = 1024
  #its weights
  hidden_weights = tf.Variable(
    tf.truncated_normal([image_size * image_size, num_hidden_neurons]))
  hidden_biases = tf.Variable(tf.zeros([num_hidden_neurons]))

  #now the layer itself. It multiplies data by weights, adds biases
  #and takes ReLU over result
  hidden_layer = tf.nn.relu(tf.matmul(tf_train_dataset_bn, hidden_weights) + hidden_biases)

  #adding the batch normalization layer
  hidden_layer_bn = tf.contrib.layers.batch_norm(hidden_layer)

  #time to go for output linear layer
  #out weights connect hidden neurons to output labels
  #biases are added to output labels  
  out_weights = tf.Variable(
    tf.truncated_normal([num_hidden_neurons, num_labels]))  

  out_biases = tf.Variable(tf.zeros([num_labels]))  

  #compute output  
  out_layer = tf.matmul(hidden_layer_bn,out_weights) + out_biases
  #our real output is a softmax of prior result
  #and we also compute its cross-entropy to get our loss
  #Notice - we introduce our L2 here
  loss = (tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
    out_layer, tf_train_labels) +
    beta*tf.nn.l2_loss(hidden_weights) +
    beta*tf.nn.l2_loss(hidden_biases) +
    beta*tf.nn.l2_loss(out_weights) +
    beta*tf.nn.l2_loss(out_biases)))

  #now we just minimize this loss to actually train the network
  optimizer = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

  #nice, now let's calculate the predictions on each dataset for evaluating the
  #performance so far
  # Predictions for the training, validation, and test data.
  train_prediction = tf.nn.softmax(out_layer)
  valid_relu = tf.nn.relu(  tf.matmul(tf_valid_dataset, hidden_weights) + hidden_biases)
  valid_prediction = tf.nn.softmax( tf.matmul(valid_relu, out_weights) + out_biases) 

  test_relu = tf.nn.relu( tf.matmul( tf_test_dataset, hidden_weights) + hidden_biases)
  test_prediction = tf.nn.softmax(tf.matmul(test_relu, out_weights) + out_biases)



#now is the actual training on the ANN we built
#we will run it for some number of steps and evaluate the progress after 
#every 500 steps

#number of steps we will train our ANN
num_steps = 3001

#actual training
with tf.Session(graph=graph) as session:
  tf.initialize_all_variables().run()
  print("Initialized")
  for step in range(num_steps):
    # Pick an offset within the training data, which has been randomized.
    # Note: we could use better randomization across epochs.
    offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
    # Generate a minibatch.
    batch_data = train_dataset[offset:(offset + batch_size), :]
    batch_labels = train_labels[offset:(offset + batch_size), :]
    # Prepare a dictionary telling the session where to feed the minibatch.
    # The key of the dictionary is the placeholder node of the graph to be fed,
    # and the value is the numpy array to feed to it.
    feed_dict = {tf_train_dataset : batch_data, tf_train_labels : batch_labels}
    _, l, predictions = session.run(
      [optimizer, loss, train_prediction], feed_dict=feed_dict)
    if (step % 500 == 0):
      print("Minibatch loss at step %d: %f" % (step, l))
      print("Minibatch accuracy: %.1f%%" % accuracy(predictions, batch_labels))
      print("Validation accuracy: %.1f%%" % accuracy(
        valid_prediction.eval(), valid_labels))
      print("Test accuracy: %.1f%%" % accuracy(test_prediction.eval(), test_labels))

【Discussion】:

Thanks for the help, I'll take a look at the official BN. However, if you have time to write an example that uses it, and something that actually answers my original questions, I'd be happy to give you a bounty :)

I provided an answer on how to use BN the "official" way here: ***.com/questions/33949786/…. If you want to take a look there and correct it, that would be awesome. I also put a bounty on that question, so if you want to post corrections or your own answer, I'd be happy to award it to you. :)

@Pinocchio Updated my answer to include a full example of building and training the neural network.

@Pinocchio Also posted the same answer to the question you mentioned, since that question seems to be the first result people get from Google when searching for "tensorflow batch normalization".
