Using Tensorflow SavedModel Format to Save and Do Predictions
Posted by rhyswang
We are now trying to deploy our deep learning model to Google Cloud, where the predictions are to be triggered from Google Cloud Functions. However, when a pre-trained model is stored in the cloud, we cannot point at an exact directory path and restore the TensorFlow session the way we did on a local machine.
So we turned to SavedModel, which behaves rather like a "prediction mode" of TensorFlow. According to the official tutorial: a SavedModel contains a complete TensorFlow program, including weights and computation. It does not require the original model-building code to run, which makes it useful for sharing or deploying.
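To make the deployment side concrete, here is a minimal sketch of how a Cloud Function could fetch the export produced in the steps below from a Cloud Storage bucket and load it at runtime. The bucket name, object prefix, local path and request handling are assumptions for illustration, not part of the original setup:

# Sketch: loading a SavedModel inside an HTTP-triggered Cloud Function (Python runtime).
# Bucket name, object prefix and local path are illustrative assumptions.
import os
import tensorflow as tf
from google.cloud import storage

MODEL_PREFIX = 'models/simple_save/model'   # assumed GCS prefix of the exported model
LOCAL_DIR = '/tmp/model'                    # /tmp is the writable path in Cloud Functions

def _download_model(bucket_name):
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    for blob in bucket.list_blobs(prefix=MODEL_PREFIX):
        if blob.name.endswith('/'):
            continue  # skip "directory" placeholder objects
        local_path = os.path.join(LOCAL_DIR, os.path.relpath(blob.name, MODEL_PREFIX))
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        blob.download_to_filename(local_path)

def predict(request):
    """HTTP Cloud Function entry point (sketch)."""
    if not os.path.isdir(LOCAL_DIR):
        _download_model('my-model-bucket')   # assumed bucket name
    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(sess, ["serve"], LOCAL_DIR)
        # feed the request data to the 'x' input and run the 'pred' output here
        ...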
The definition of our graph, included here just to show the input and output tensors:
'''RNN Model Definition'''
tf.reset_default_graph()

# define inputs
tf_x = tf.placeholder(tf.float32, [None, window_size, 1], name='x')
tf_y = tf.placeholder(tf.int32, [None, 2], name='y')

cells = [tf.keras.layers.LSTMCell(units=n) for n in num_units]
stacked_rnn_cell = tf.keras.layers.StackedRNNCells(cells)
outputs, (h_c, h_n) = tf.nn.dynamic_rnn(
    stacked_rnn_cell,      # cell you have chosen
    tf_x,                  # input
    initial_state=None,    # the initial hidden state
    dtype=tf.float32,      # must be given if initial_state=None
    time_major=False,      # False: (batch, time step, input); True: (time step, batch, input)
)
l1 = tf.layers.dense(outputs[:, -1, :], 32, activation=tf.nn.relu, name='l1')
l2 = tf.layers.dense(l1, 8, activation=tf.nn.relu, name='l6')
pred = tf.layers.dense(l2, 2, activation=tf.nn.relu, name='pred')

with tf.name_scope('loss'):
    cross_entropy = tf.nn.softmax_cross_entropy_with_logits_v2(labels=tf_y, logits=pred)
    loss = tf.reduce_mean(cross_entropy)
    tf.summary.scalar("loss", tensor=loss)

train_op = tf.train.AdamOptimizer(LR).minimize(loss)
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(tf_y, axis=1), tf.argmax(pred, axis=1)), tf.float32))

init_op = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
saver = tf.train.Saver()
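The snippet above assumes a few names defined elsewhere in the original notebook (window_size, num_units, LR). A minimal sketch with plausible values; the exact numbers are our assumptions, except window_size = 24, which matches the reshape used at prediction time below:

# Assumed hyperparameters (illustrative values; only window_size = 24 is implied by the post)
window_size = 24      # length of each input sequence
num_units = [64, 32]  # hidden sizes of the stacked LSTM cells
LR = 0.001            # learning rate for the Adam optimizer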
To train and save the model, we use simple_save:
sess = tf.Session()
sess.run(init_op)
for i in range(0, n):
    # batch_X, batch_y: one training batch, shapes (batch, window_size, 1) and (batch, 2)
    sess.run(train_op, feed_dict={tf_x: batch_X, tf_y: batch_y})
    ...
tf.saved_model.simple_save(sess, 'simple_save/model', inputs={"x": tf_x}, outputs={"pred": pred})
sess.close()
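simple_save writes a self-contained export directory (a saved_model.pb plus a variables/ subfolder) and registers the inputs/outputs dicts under the default serving signature. A quick, optional check of what ended up on disk; this sketch assumes the export path used above:

import os
import tensorflow as tf

export_dir = 'simple_save/model'

# List everything simple_save wrote: saved_model.pb and the variables/ checkpoint files
for root, _, files in os.walk(export_dir):
    for f in files:
        print(os.path.join(root, f))

# True if the directory contains a SavedModel protobuf
print(tf.saved_model.loader.maybe_saved_model_directory(export_dir))

The saved_model_cli tool bundled with TensorFlow (saved_model_cli show --dir simple_save/model --all) prints the registered signatures from the command line.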
Restore and Predict:
with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, ["serve"], 'simple_save_test/model')
    batch = sess.run('pred/Relu:0', feed_dict={'x:0': dataX.reshape([-1, 24, 1])})
    print(batch)
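Hard-coding 'pred/Relu:0' and 'x:0' works, but the tensor names can also be resolved from the signature that simple_save registered (keyed by the "x" and "pred" names passed to it), which is less brittle if the graph changes. A sketch under the same assumptions as above:

import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    # loader.load returns the MetaGraphDef, which carries the signature map
    meta_graph_def = tf.saved_model.loader.load(sess, ["serve"], 'simple_save_test/model')
    signature = meta_graph_def.signature_def['serving_default']

    input_name = signature.inputs['x'].name        # e.g. 'x:0'
    output_name = signature.outputs['pred'].name   # e.g. 'pred/Relu:0'

    batch = sess.run(output_name, feed_dict={input_name: dataX.reshape([-1, 24, 1])})
    print(batch)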
References:
Medium post: https://medium.com/@jsflo.dev/saving-and-loading-a-tensorflow-model-using-the-savedmodel-api-17645576527
TensorFlow official guide on SavedModel: https://www.tensorflow.org/guide/saved_model