Tensorflow Details - P80 - Deep Neural Networks
Posted by liuboblog
1. Most of this section is review material; the figure below gives the general idea:
2. Learn to use
from numpy.random import RandomState
and then
rdm = RandomState(1)
dataset_size = 128
X = rdm.rand(dataset_size, 2)
Y = [[(x1 + x2) + rdm.rand() / 10.0 - 0.05] for (x1, x2) in X]
to generate the training data. Because the seed is fixed at 1, the same X and Y are produced on every run, so the assigned values never change.
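As a quick check (a minimal sketch, not part of the original post), two RandomState objects created with the same seed produce identical arrays, which is why the data above is reproducible:

import numpy as np
from numpy.random import RandomState

# Two generators seeded identically yield exactly the same sequence.
a = RandomState(1).rand(4, 2)
b = RandomState(1).rand(4, 2)
print(np.array_equal(a, b))  # True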
import tensorflow as tf
from numpy.random import RandomState

batch_size = 8

with tf.name_scope("inputs"):
    xs = tf.placeholder(tf.float32, [None, 2], name="xs")
    ys = tf.placeholder(tf.float32, [None, 1], name="ys")

with tf.variable_scope("get_variable"):
    w1 = tf.get_variable("w1", [2, 1], tf.float32, tf.truncated_normal_initializer(seed=1))
    b1 = tf.get_variable("b1", [1], tf.float32, tf.zeros_initializer())

with tf.name_scope("op"):
    y = tf.matmul(xs, w1) + b1

with tf.name_scope("loss_op"):
    # Asymmetric loss: predicting above the label costs 10x more than predicting below it.
    loss = tf.reduce_mean(tf.where(tf.greater(ys, y), (ys - y) * 1, (y - ys) * 10))
    tf.summary.scalar("loss", loss)

train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)

# Training data generated with a fixed seed, so it is reproducible.
rdm = RandomState(1)
dataset_size = 128
X = rdm.rand(dataset_size, 2)
Y = [[(x1 + x2) + rdm.rand() / 10.0 - 0.05] for (x1, x2) in X]

merged = tf.summary.merge_all()

with tf.Session() as sess:
    writer = tf.summary.FileWriter("path/", graph=tf.get_default_graph())
    tf.global_variables_initializer().run()
    for i in range(5000):
        # Slide a mini-batch window over the dataset.
        start = i * batch_size % dataset_size
        end = min(start + batch_size, dataset_size)
        sess.run(train_step, feed_dict={xs: X[start:end], ys: Y[start:end]})
        if i % 100 == 0:
            # Log the summary and loss on the full dataset every 100 steps.
            result, losses = sess.run([merged, loss], feed_dict={xs: X, ys: Y})
            print("After %d steps, loss is %g" % (i, losses))
            writer.add_summary(result, i)
    writer.close()
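The custom loss above is asymmetric: when the prediction y is below the label ys the error is weighted by 1, and when it is above the label it is weighted by 10, so the trained model learns to err on the low side. A minimal NumPy sketch of the same formula (an illustration, not part of the original code):

import numpy as np

def asymmetric_loss(ys, y):
    # Mirrors tf.where(tf.greater(ys, y), (ys - y) * 1, (y - ys) * 10).
    return np.mean(np.where(ys > y, (ys - y) * 1.0, (y - ys) * 10.0))

labels = np.array([[1.0]])
print(asymmetric_loss(labels, np.array([[0.9]])))  # under-prediction by 0.1 costs 0.1
print(asymmetric_loss(labels, np.array([[1.1]])))  # over-prediction by 0.1 costs ~1.0

The loss scalar written by tf.summary.FileWriter can then be viewed with TensorBoard, for example by running tensorboard --logdir path/ and opening the scalars tab to see the curve logged every 100 steps.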