[TensorFlow] A TensorFlow Training Example on Mac

Posted by Taily老段


On macOS, the simplest way to install TensorFlow is directly through Anaconda.
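For reference, a minimal install sketch. The environment name py27tf and the TensorFlow version match the session log below; the exact commands are an assumption, not taken from the original post:

# create and activate a Python 2.7 environment, then install TF 1.12
conda create -n py27tf python=2.7
source activate py27tf
pip install tensorflow==1.12.0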

#coding:utf-8
# Train a tiny two-layer network on a toy dataset (TensorFlow 1.x, Python 2).

import tensorflow as tf
import numpy as np

BATCH_SIZE = 8
seed = 23455

# Generate 32 random 2-D samples; the label is 1 when x0 + x1 < 1, else 0.
rng = np.random.RandomState(seed)
X = rng.rand(32, 2)
Y = [[int(x0 + x1 < 1)] for (x0, x1) in X]

print "X:\n", X
print "Y:\n", Y

# Placeholders for a batch of inputs and labels.
x  = tf.placeholder(tf.float32, shape=(None, 2))
y_ = tf.placeholder(tf.float32, shape=(None, 1))

# Weights of a 2-3-1 network (no activation function, so the model is linear).
w1 = tf.Variable(tf.random_normal([2, 3], stddev=1, seed=1))
w2 = tf.Variable(tf.random_normal([3, 1], stddev=1, seed=1))

a = tf.matmul(x, w1)
y = tf.matmul(a, w2)

# Mean squared error, minimized with momentum SGD.
loss = tf.reduce_mean(tf.square(y - y_))
#train_step = tf.train.GradientDescentOptimizer(0.001).minimize(loss)
train_step = tf.train.MomentumOptimizer(0.001, 0.9).minimize(loss)

with tf.Session() as sess:
	init_op = tf.global_variables_initializer()
	sess.run(init_op)
	print "w1:\n", sess.run(w1)
	print "w2:\n", sess.run(w2)
	print "\n"

	STEPS = 30000
	for i in range(STEPS):
		# Cycle through the 32 samples in batches of BATCH_SIZE.
		start = (i * BATCH_SIZE) % 32
		end = start + BATCH_SIZE
		sess.run(train_step, feed_dict={x: X[start:end], y_: Y[start:end]})
		if i % 500 == 0:
			total_loss = sess.run(loss, feed_dict={x: X, y_: Y})
			print("After %d training step(s), loss on all data is %g" % (i, total_loss))

	print "\n"
	print "w1:\n", sess.run(w1)
	print "w2:\n", sess.run(w2)
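To check how well the converged model separates the two classes, a few lines could be appended at the end of the with tf.Session() block. This check is not part of the original script; it is a minimal sketch using the same TF 1.x session API:

	# Illustrative addition (indented to sit inside the Session block):
	# threshold the network's raw outputs at 0.5 and compare to the labels.
	pred = sess.run(y, feed_dict={x: X})
	correct = sum(int(p[0] > 0.5) == label[0] for p, label in zip(pred, Y))
	print "accuracy: %d/32" % correct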

Training output:

Last login: Sat Dec 29 15:18:05 on ttys000
/Users/taily/.anaconda/navigator/a.tool ; exit;
bogon:~ taily$ /Users/taily/.anaconda/navigator/a.tool ; exit;
(py27tf) bash-3.2$ which python
/anaconda2/envs/py27tf/bin/python
(py27tf) bash-3.2$ python
Python 2.7.15 |Anaconda, Inc.| (default, Dec 14 2018, 13:10:39) 
[GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import tensorflow as tf
>>> tf.__version__
'1.12.0'
>>> quit()
(py27tf) bash-3.2$ cd /Users/taily
(py27tf) bash-3.2$ cd tf
(py27tf) bash-3.2$ python tf3_9_conv2.py
X:
[[0.83494319 0.11482951]
 [0.66899751 0.46594987]
 [0.60181666 0.58838408]
 [0.31836656 0.20502072]
 [0.87043944 0.02679395]
 [0.41539811 0.43938369]
 [0.68635684 0.24833404]
 [0.97315228 0.68541849]
 [0.03081617 0.89479913]
 [0.24665715 0.28584862]
 [0.31375667 0.47718349]
 [0.56689254 0.77079148]
 [0.7321604  0.35828963]
 [0.15724842 0.94294584]
 [0.34933722 0.84634483]
 [0.50304053 0.81299619]
 [0.23869886 0.9895604 ]
 [0.4636501  0.32531094]
 [0.36510487 0.97365522]
 [0.73350238 0.83833013]
 [0.61810158 0.12580353]
 [0.59274817 0.18779828]
 [0.87150299 0.34679501]
 [0.25883219 0.50002932]
 [0.75690948 0.83429824]
 [0.29316649 0.05646578]
 [0.10409134 0.88235166]
 [0.06727785 0.57784761]
 [0.38492705 0.48384792]
 [0.69234428 0.19687348]
 [0.42783492 0.73416985]
 [0.09696069 0.04883936]]
Y:
[[1], [0], [0], [1], [1], [1], [1], [0], [1], [1], [1], [0], [0], [0], [0], [0], [0], [1], [0], [0], [1], [1], [0], [1], [0], [1], [1], [1], [1], [1], [0], [1]]
2018-12-29 15:57:21.540809: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
2018-12-29 15:57:21.542147: I tensorflow/core/common_runtime/process_util.cc:69] Creating new thread pool with default inter op setting: 8. Tune using inter_op_parallelism_threads for best performance.
w1:
[[-0.8113182   1.4845988   0.06532937]
 [-2.4427042   0.0992484   0.5912243 ]]
w2:
[[-0.8113182 ]
 [ 1.4845988 ]
 [ 0.06532937]]


After 0 training step(s), loss on all data is 5.13118
After 500 training step(s), loss on all data is 0.384391
After 1000 training step(s), loss on all data is 0.383592
After 1500 training step(s), loss on all data is 0.383562
After 2000 training step(s), loss on all data is 0.383561
... (the loss holds at 0.383561 at every 500-step checkpoint from step 2500 through step 29000) ...
After 29500 training step(s), loss on all data is 0.383561


w1:
[[-0.6103307   0.8320429   0.07488168]
 [-2.2522218  -0.14554326  0.5666249 ]]
w2:
[[-0.10472749]
 [ 0.772734  ]
 [-0.04402315]]
(py27tf) bash-3.2$ 
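The loss plateaus so early because there is no activation function between the layers: the composition tf.matmul(tf.matmul(x, w1), w2) is a single linear map, so the MSE objective has one global optimum, which momentum SGD reaches within a couple of thousand steps. A quick way to see this (an illustrative addition, not part of the original post) is to collapse the trained weights from the log above into one effective 2x1 matrix:

import numpy as np

# Final weights copied from the session log above.
w1 = np.array([[-0.6103307,   0.8320429,   0.07488168],
               [-2.2522218,  -0.14554326,  0.5666249 ]])
w2 = np.array([[-0.10472749],
               [ 0.772734  ],
               [-0.04402315]])

# With no nonlinearity, the network is equivalent to x.dot(w_eff).
w_eff = w1.dot(w2)
print(w_eff)  # a single 2x1 linear map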
