TensorFlow Learning Notes


1. Constants

import tensorflow as tf
m1 = tf.constant([[3, 3]])    # 1x2 constant matrix
m2 = tf.constant([[2], [3]])  # 2x1 constant matrix
product = tf.matmul(m1, m2)   # matrix-multiplication op
print(product)                # prints the Tensor object, not its value
sess = tf.Session()
result = sess.run(product)    # run the graph to get the actual value
print(result)
sess.close()

# Equivalent form: the with-block closes the session automatically
with tf.Session() as sess:
    result = sess.run(product)
    print(result)

Output:
[[15]]
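As a sanity check, the same 1x2 by 2x1 product can be verified without TensorFlow; a plain-Python sketch (the helper name `matmul` here is illustrative, not part of the original script):

```python
# Plain-Python matrix multiply, enough for the small case above.
def matmul(a, b):
    rows, inner, cols = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(inner))
             for j in range(cols)]
            for i in range(rows)]

m1 = [[3, 3]]
m2 = [[2], [3]]
print(matmul(m1, m2))  # [[15]], i.e. 3*2 + 3*3
```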

2. Variables

import tensorflow as tf
x = tf.Variable([1, 2])   # Variable: a mutable tensor
a = tf.constant([3, 3])   # constant
# subtraction op
sub = tf.subtract(x, a)
# addition op
add = tf.add(x, sub)
# variable initializer
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)   # initialize the variables
    print(sess.run(sub))
    print(sess.run(add))

# Create a variable named "counter", initialized to 0
state = tf.Variable(0, name='counter')
# An op that computes state + 1
new_value = tf.add(state, 1)
# An assign op: store new_value back into state
update = tf.assign(state, new_value)
# variable initializer
init = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init)     # initialize the variables
    print(sess.run(state))
    for _ in range(5):
        sess.run(update)
        print(sess.run(state))

Output:
[-2 -1]
[-1  1]
0
1
2
3
4
5
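The assign/update loop above amounts to repeated in-place assignment; a plain-Python analogue of what running `update` five times does (this mimics the semantics, not TensorFlow's actual graph machinery):

```python
# Mimic tf.Variable(0) plus five runs of tf.assign(state, state + 1).
state = 0          # initial variable value
print(state)       # 0
for _ in range(5):
    state = state + 1   # new_value = state + 1, then assign back
    print(state)
# prints 0, 1, 2, 3, 4, 5 in total
```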

3. Fetch and Feed

import tensorflow as tf
# Fetch: run several ops in one call and fetch all their results
# define constants
input1 = tf.constant(3.0)
input2 = tf.constant(2.0)
input3 = tf.constant(5.0)
# addition and multiplication ops
add = tf.add(input2, input3)
mul = tf.multiply(input1, add)

with tf.Session() as sess:
    result = sess.run([mul, add])   # fetch both values at once
    print(result)

# Feed: placeholders are filled with data at run time
input1 = tf.placeholder(tf.float32)
input2 = tf.placeholder(tf.float32)
output = tf.multiply(input1, input2)  # multiplication

with tf.Session() as sess:
    # feed data as a dictionary mapping placeholders to values
    print(sess.run(output, feed_dict={input1: [8.], input2: [2.]}))

Output:
[21.0, 7.0]

[ 16.]
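Conceptually, a placeholder is a named hole that the feed dictionary fills at run time; a minimal sketch of that substitution in plain Python (the function `run_output` is hypothetical, not TensorFlow's mechanism):

```python
# Treat the graph "output = input1 * input2" as a function of a feed dict.
def run_output(feed_dict):
    return [a * b for a, b in zip(feed_dict['input1'], feed_dict['input2'])]

print(run_output({'input1': [8.0], 'input2': [2.0]}))  # [16.0]
```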

4. A simple MNIST classification script

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Load the dataset; one_hot=True encodes each label as a one-hot vector
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)

# Number of images fed in per batch
batch_size = 100
# Total number of batches per epoch
n_batch = mnist.train.num_examples // batch_size

# Placeholders for images (28*28 = 784 pixels) and labels (10 classes)
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])

# A simple one-layer network: softmax regression
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
prediction = tf.nn.softmax(tf.matmul(x, W) + b)

# Quadratic cost function
loss = tf.reduce_mean(tf.square(y - prediction))
# Gradient descent with learning rate 0.2
train_step = tf.train.GradientDescentOptimizer(0.2).minimize(loss)

# Variable initializer
init = tf.global_variables_initializer()

# Boolean vector of per-example correctness;
# argmax returns the index of the largest entry along the given axis
correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(prediction, 1))
# Accuracy: fraction of correct predictions
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(21):
        for batch in range(n_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            sess.run(train_step, feed_dict={x: batch_xs, y: batch_ys})

        acc = sess.run(accuracy, feed_dict={x: mnist.test.images, y: mnist.test.labels})
        print("Iter " + str(epoch) + ",Testing Accuracy " + str(acc))

Output:
Extracting MNIST_data/train-images-idx3-ubyte.gz
Extracting MNIST_data/train-labels-idx1-ubyte.gz
Extracting MNIST_data/t10k-images-idx3-ubyte.gz
Extracting MNIST_data/t10k-labels-idx1-ubyte.gz
Iter 0,Testing Accuracy 0.8302
Iter 1,Testing Accuracy 0.8711
Iter 2,Testing Accuracy 0.8821
Iter 3,Testing Accuracy 0.8881
Iter 4,Testing Accuracy 0.8944
Iter 5,Testing Accuracy 0.8965
Iter 6,Testing Accuracy 0.8997
Iter 7,Testing Accuracy 0.9005
Iter 8,Testing Accuracy 0.9039
Iter 9,Testing Accuracy 0.9046
Iter 10,Testing Accuracy 0.9061
Iter 11,Testing Accuracy 0.9071
Iter 12,Testing Accuracy 0.9073
Iter 13,Testing Accuracy 0.9089
Iter 14,Testing Accuracy 0.9101
Iter 15,Testing Accuracy 0.9106
Iter 16,Testing Accuracy 0.9118
Iter 17,Testing Accuracy 0.9124
Iter 18,Testing Accuracy 0.9127
Iter 19,Testing Accuracy 0.9126
Iter 20,Testing Accuracy 0.9139
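The forward pass, quadratic loss, and accuracy in the script above reduce to a few array operations. A NumPy sketch on a toy batch (batch size, labels, and the random input here are made up for illustration; with zero-initialized weights the softmax output is uniform, so every class gets probability 0.1):

```python
import numpy as np

def softmax(z):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 784))          # toy batch of 4 "images"
W = np.zeros((784, 10))                # same zero init as the script
b = np.zeros(10)

prediction = softmax(x @ W + b)        # uniform rows: 0.1 per class
y = np.eye(10)[[3, 1, 4, 1]]           # one-hot labels for the toy batch
loss = np.mean((y - prediction) ** 2)  # quadratic cost, about 0.09 here
accuracy = np.mean(np.argmax(y, 1) == np.argmax(prediction, 1))
print(loss, accuracy)
```

Note that `np.argmax` breaks the uniform-row tie by returning index 0, so on this untrained toy batch the accuracy is 0.0; training moves `W` and `b` away from zero and the predictions become informative.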
