TensorFlow TensorBoard Usage

Posted by 下路派出所


Starting TensorBoard

tensorboard --logdir="log path"

TensorBoard is usually started after the model has finished running, and the log path must be the same directory the model writes its summaries to.
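
For example, assuming the summaries were written to ./logs (a hypothetical path):

tensorboard --logdir=./logs

TensorBoard then serves its web UI at http://localhost:6006 by default.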

 

Recording Data and Plotting

  Graph structure:

    The code below creates a scope that shows up in the graph as a single expandable node; scopes can also be nested:

    

with tf.name_scope(layer_name):
    with tf.name_scope('weights'):
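
As a minimal self-contained sketch (TF 1.x API; the scope and variable names are invented for illustration), nested scopes become collapsible boxes in the GRAPHS tab and prefixes in node names:

import tensorflow as tf

with tf.name_scope('layer1'):          # outer collapsible box
    with tf.name_scope('weights'):     # nested box inside layer1
        W = tf.Variable(tf.zeros([2, 2]), name='W')

print(W.name)  # layer1/weights/W:0 -- scope names prefix the node name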

 

Nodes are usually variables or constants; give them a name='...' argument so they show up in the graph under a readable name (otherwise TensorFlow auto-generates one), e.g.:

with tf.name_scope('weights'):
    Weights = tf.Variable(tf.random_normal([in_size, out_size]), name='W')  # name labels the node in the graph

 

Variables:
Variables can be recorded with the tf.summary.histogram() method (tf.histogram_summary() in pre-1.0 releases):
tf.summary.histogram(layer_name + '/weights', Weights)  # tag name, then the tensor to record


 

Scalars:
Scalar values such as the loss can be recorded with the tf.summary.scalar() method (tf.scalar_summary() in pre-1.0 releases):
tf.summary.scalar('loss', loss)  # tag name, then the value to record
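
For instance, a short sketch (TF 1.x; the layer and loss are invented for illustration) that records both kinds of summaries:

import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
W = tf.Variable(tf.random_normal([784, 10]), name='W')
loss = tf.reduce_mean(tf.square(tf.matmul(x, W)))  # toy loss, illustration only
tf.summary.histogram('layer1/weights', W)          # DISTRIBUTIONS / HISTOGRAMS tabs
tf.summary.scalar('loss', loss)                    # SCALARS tab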


 

Display:
Finally, all summaries are merged and written out with a FileWriter (in pre-1.0 releases these were tf.merge_all_summaries() and tf.train.SummaryWriter):
# merge all summary ops into a single op
merged = tf.summary.merge_all()
# choose the directory where the visualization data is stored
writer = tf.summary.FileWriter("/log_dir", sess.graph)

 

merged is itself an op and must be run:

result = sess.run(merged)  # evaluate the merged summary op
writer.add_summary(result, i)  # i is the step number, used as the x-axis in the charts
 
Running:
After the program runs, an event file (events.out.tfevents.*) is created in the chosen directory; start TensorBoard against it:
tensorboard --logdir="/log_dir"

 

Below is a complete example:

#encoding=utf-8
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets('MNIST_data/', one_hot=True)

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

# scalar records single values; histogram records the distribution of a tensor
# over the course of training (single values can also be stored with scalar)

myGraph = tf.Graph()
with myGraph.as_default():
    with tf.name_scope('inputsAndLabels'):
        x_raw = tf.placeholder(tf.float32, shape=[None, 784])
        y = tf.placeholder(tf.float32, shape=[None, 10])

    with tf.name_scope('hidden1'):
        x = tf.reshape(x_raw, shape=[-1, 28, 28, 1])
        W_conv1 = weight_variable([5, 5, 1, 32])
        b_conv1 = bias_variable([32])
        l_conv1 = tf.nn.relu(tf.nn.conv2d(x, W_conv1, strides=[1, 1, 1, 1], padding='SAME') + b_conv1)
        l_pool1 = tf.nn.max_pool(l_conv1, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
        tf.summary.image('x_input', x, max_outputs=10)
        tf.summary.histogram('W_conv1', W_conv1)
        tf.summary.histogram('b_conv1', b_conv1)

    with tf.name_scope('hidden2'):
        W_conv2 = weight_variable([5, 5, 32, 64])
        b_conv2 = bias_variable([64])
        l_conv2 = tf.nn.relu(tf.nn.conv2d(l_pool1, W_conv2, strides=[1, 1, 1, 1], padding='SAME') + b_conv2)
        l_pool2 = tf.nn.max_pool(l_conv2, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')
        tf.summary.histogram('W_conv2', W_conv2)
        tf.summary.histogram('b_conv2', b_conv2)

    with tf.name_scope('fc1'):
        W_fc1 = weight_variable([64*7*7, 1024])
        b_fc1 = bias_variable([1024])
        l_pool2_flat = tf.reshape(l_pool2, [-1, 64*7*7])
        l_fc1 = tf.nn.relu(tf.matmul(l_pool2_flat, W_fc1) + b_fc1)
        keep_prob = tf.placeholder(tf.float32)
        # dropout: with 1024 fully connected units the network can pick up
        # spurious detail, so activations are randomly dropped during training
        l_fc1_drop = tf.nn.dropout(l_fc1, keep_prob)
        tf.summary.histogram('W_fc1', W_fc1)
        tf.summary.histogram('b_fc1', b_fc1)

    with tf.name_scope('fc2'):
        W_fc2 = weight_variable([1024, 10])
        b_fc2 = bias_variable([10])
        y_conv = tf.matmul(l_fc1_drop, W_fc2) + b_fc2
        tf.summary.histogram('W_fc2', W_fc2)
        tf.summary.histogram('b_fc2', b_fc2)

    with tf.name_scope('train'):
        cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y))
        # gradient descent step (Adam)
        train_step = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cross_entropy)
        correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y, 1))
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
        # scalar records single values
        tf.summary.scalar('loss', cross_entropy)
        tf.summary.scalar('accuracy', accuracy)

with tf.Session(graph=myGraph) as sess:
    sess.run(tf.global_variables_initializer())
    saver = tf.train.Saver()
    # collect every summary op defined above into one op
    merged = tf.summary.merge_all()
    # log directory: must match the --logdir later passed to tensorboard
    # (run the program to write the logs first, then start tensorboard with
    # the same logdir); passing the graph makes the GRAPHS tab work
    summary_writer = tf.summary.FileWriter('/Users/maxiong/Workpace/Tensorboard/log', graph=sess.graph)
    for i in range(1000):
        batch = mnist.train.next_batch(50)
        sess.run(train_step, feed_dict={x_raw: batch[0], y: batch[1], keep_prob: 0.5})
        if i % 100 == 0:
            train_accuracy = accuracy.eval(feed_dict={x_raw: batch[0], y: batch[1], keep_prob: 1.0})
            print('step %d training accuracy: %g' % (i, train_accuracy))
            # evaluate the merged summary op
            summary = sess.run(merged, feed_dict={x_raw: batch[0], y: batch[1], keep_prob: 1.0})
            # write the result, tagged with the current step
            summary_writer.add_summary(summary, i)
    test_accuracy = accuracy.eval(feed_dict={x_raw: mnist.test.images, y: mnist.test.labels, keep_prob: 1.0})
    print('test accuracy: %g' % test_accuracy)
    saver.save(sess, '/Users/maxiong/Workpace/Tensorboard/model', global_step=1)

 

 
