TensorFlow Official Documentation: Getting Started Notes [1]


Tensors

3 # a rank 0 tensor; this is a scalar with shape []
[1., 2., 3.] # a rank 1 tensor; this is a vector with shape [3]
[[1., 2., 3.], [4., 5., 6.]] # a rank 2 tensor; a matrix with shape [2, 3]
[[[1., 2., 3.]], [[7., 8., 9.]]] # a rank 3 tensor with shape [2, 1, 3]
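
To make these ranks and shapes concrete, here is a minimal sketch (assuming TensorFlow 1.x, imported as tf) that builds each of the examples above as a constant and prints its static shape:

import tensorflow as tf

# Build the example tensors above and inspect their static shapes.
scalar = tf.constant(3.)                                # rank 0, shape ()
vector = tf.constant([1., 2., 3.])                      # rank 1, shape (3,)
matrix = tf.constant([[1., 2., 3.], [4., 5., 6.]])      # rank 2, shape (2, 3)
cube = tf.constant([[[1., 2., 3.]], [[7., 8., 9.]]])    # rank 3, shape (2, 1, 3)
for t in (scalar, vector, matrix, cube):
    print(t.shape)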

 

 

Computational Graph

A TensorFlow program consists of two discrete parts: building the computational graph, and then running the computational graph. For example, the following creates two floating-point constant nodes:

import tensorflow as tf

node1 = tf.constant(3.0, dtype=tf.float32)
node2 = tf.constant(4.0)  # also tf.float32 implicitly
print(node1, node2)

Output:

Tensor("Const:0", shape=(), dtype=float32) Tensor("Const_1:0", shape=(), dtype=float32)

Notice that printing the nodes does not output the values 3.0 and 4.0; they are graph nodes that only produce those values when evaluated. To evaluate the nodes, you must create a Session object and call its run method to run the computational graph. A session encapsulates the control and state of the TensorFlow runtime.

sess = tf.Session()
print(sess.run([node1, node2]))

Output:

[3.0, 4.0]

 

node3 = tf.add(node1, node2)
print("node3:", node3)
print("sess.run(node3):", sess.run(node3))

Output:

node3: Tensor("Add:0", shape=(), dtype=float32)
sess.run(node3): 7.0

 

A graph can be parameterized to accept external inputs, known as placeholders. A placeholder is a promise to provide a value later:

a = tf.placeholder(tf.float32)
b = tf.placeholder(tf.float32)
adder_node = a + b  # + provides a shortcut for tf.add(a, b)
print(sess.run(adder_node, {a: 3, b: 4.5}))

print(sess.run(adder_node, {a: [1, 3], b: [2, 4]}))
Output:
7.5
[ 3.  7.]

 

The graph can be made more complex by adding further operations:

add_and_triple = adder_node * 3.
print(sess.run(add_and_triple, {a: 3, b: 4.5}))
Output:
22.5

 

Variables allow us to add trainable parameters to a graph. They are constructed with a type and an initial value:

W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b

Constants are initialized when you call tf.constant, and their value can never change. By contrast, variables are not initialized when you call tf.Variable. To initialize all the variables in a TensorFlow program, you must explicitly run a special initialization operation:

init = tf.global_variables_initializer()
sess.run(init)

Since x is a placeholder, we can evaluate linear_model for several values of x at once:

print(sess.run(linear_model, {x: [1, 2, 3, 4]}))
Output:
[ 0.          0.30000001  0.60000002  0.90000004]
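
These values are just W * x + b = 0.3 * x - 0.3 for each input; the trailing digits such as 0.30000001 are ordinary float32 rounding noise. A quick plain-Python check of the same arithmetic:

# Recompute the model output by hand (pure Python, no TensorFlow needed).
W_val, b_val = 0.3, -0.3
print([W_val * x_i + b_val for x_i in [1, 2, 3, 4]])  # approximately [0.0, 0.3, 0.6, 0.9]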

To evaluate the model on training data, we need a y placeholder to provide the desired values, and we need to write a loss function. A standard loss for linear regression sums the squares of the deltas between the model's output and the provided target values:

y = tf.placeholder(tf.float32)
squared_deltas = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_deltas)
print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))
Output:
23.66
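
This value can be reproduced by hand: with W = 0.3 and b = -0.3 the model predicts [0, 0.3, 0.6, 0.9], the squared deltas against y = [0, -1, -2, -3] are [0, 1.69, 6.76, 15.21], and these sum to 23.66. The same calculation in plain Python:

# Reproduce the loss value without TensorFlow.
preds = [0.3 * x_i - 0.3 for x_i in [1, 2, 3, 4]]
targets = [0, -1, -2, -3]
print(sum((p - t) ** 2 for p, t in zip(preds, targets)))  # ~23.66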

A variable can be changed to a new value with operations such as tf.assign. For example, W = -1 and b = 1 are the optimal parameters for this model; assigning them by hand makes the model output [0, -1, -2, -3], which matches y exactly, so the loss drops to zero:

fixW = tf.assign(W, [-1.])
fixb = tf.assign(b, [1.])
sess.run([fixW, fixb])

print(sess.run(loss, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]}))

Output:

0.0

 

tf.train API

TensorFlow provides optimizers that slowly change each variable in order to minimize the loss function. The simplest optimizer is gradient descent: it modifies each variable according to the magnitude of the derivative of the loss with respect to that variable (which TensorFlow computes automatically via tf.gradients).

For example:

optimizer = tf.train.GradientDescentOptimizer(0.01)

train = optimizer.minimize(loss)
sess.run(init) # reset values to incorrect defaults.
for i in range(1000):
  sess.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})

print(sess.run([W, b]))

Output:

[array([-0.9999969], dtype=float32), array([ 0.99999082], dtype=float32)]

 

Complete Program

The complete trainable linear regression model is as follows:

import tensorflow as tf

# Model parameters
W = tf.Variable([.3], dtype=tf.float32)
b = tf.Variable([-.3], dtype=tf.float32)
# Model input and output
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)

# loss
loss = tf.reduce_sum(tf.square(linear_model - y)) # sum of the squares
# optimizer
optimizer = tf.train.GradientDescentOptimizer(0.01)
train = optimizer.minimize(loss)

# training data
x_train = [1, 2, 3, 4]
y_train = [0, -1, -2, -3]
# training loop
init = tf.global_variables_initializer()
sess = tf.Session()
sess.run(init)  # reset variables to their (incorrect) initial values
for i in range(1000):
    sess.run(train, {x: x_train, y: y_train})

# evaluate training accuracy
curr_W, curr_b, curr_loss = sess.run([W, b, loss], {x: x_train, y: y_train})
print("W: %s b: %s loss: %s"%(curr_W, curr_b, curr_loss))
Output:
W: [-0.9999969] b: [ 0.99999082] loss: 5.69997e-11
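
The trained parameters come out very close to the true values W = -1 and b = 1. As a small usage sketch (assuming the session above is still open), the fitted model can now be evaluated on new inputs:

# Predict with the trained model; outputs should be close to -x + 1.
print(sess.run(linear_model, {x: [5., 6., 7.]}))  # approximately [-4., -5., -6.]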
