TensorFlow Basic Models: The K-Means Algorithm
Posted by hyhy904
This example runs the K-Means algorithm in TensorFlow to cluster the MNIST digit images, then assigns each centroid the digit label that occurs most often among its members and measures the resulting classification accuracy.
The code is as follows:
from __future__ import print_function

import os

import numpy as np
import tensorflow as tf
from tensorflow.contrib.factorization import KMeans

# Ignore all GPUs; this example does not benefit from them.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# Import the MNIST data
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("./tmp/data/", one_hot=True)
full_data_x = mnist.train.images

# Parameters
num_steps = 50      # total training steps
batch_size = 1024   # mini-batch size
k = 25              # number of clusters
num_classes = 10    # the 10 digit classes
num_features = 784  # each image is 28x28 pixels => 784 features

# Input images
X = tf.placeholder(tf.float32, shape=[None, num_features])
# Labels (only used to assign a label to each centroid and for testing)
Y = tf.placeholder(tf.float32, shape=[None, num_classes])

# K-Means parameters
kmeans = KMeans(inputs=X, num_clusters=k, distance_metric='cosine',
                use_mini_batch=True)

# Build the K-Means graph
training_graph = kmeans.training_graph()

if len(training_graph) > 6:  # TensorFlow 1.4+
    (all_scores, cluster_idx, scores, cluster_centers_initialized,
     cluster_centers_var, init_op, train_op) = training_graph
else:
    (all_scores, cluster_idx, scores, cluster_centers_initialized,
     init_op, train_op) = training_graph

cluster_idx = cluster_idx[0]  # fix for cluster_idx being a tuple
avg_distance = tf.reduce_mean(scores)

# Initialize the variables (i.e. assign their default values)
init_vars = tf.global_variables_initializer()

# Start a TensorFlow session
sess = tf.Session()

# Run the initializers
sess.run(init_vars, feed_dict={X: full_data_x})
sess.run(init_op, feed_dict={X: full_data_x})

# Training loop
for i in range(1, num_steps + 1):
    _, d, idx = sess.run([train_op, avg_distance, cluster_idx],
                         feed_dict={X: full_data_x})
    if i % 10 == 0 or i == 1:
        print("Step %i, Avg Distance: %f" % (i, d))

# Assign a label to each centroid:
# count the total number of labels per centroid, using the label of each
# training sample and its closest centroid (given by 'idx')
counts = np.zeros(shape=(k, num_classes))
for i in range(len(idx)):
    counts[idx[i]] += mnist.train.labels[i]
# Assign the most frequent label to each centroid
labels_map = [np.argmax(c) for c in counts]
labels_map = tf.convert_to_tensor(labels_map)

# Evaluation ops
# Lookup: centroid_id -> label
cluster_label = tf.nn.embedding_lookup(labels_map, cluster_idx)
# Compute accuracy
correct_prediction = tf.equal(cluster_label, tf.cast(tf.argmax(Y, 1), tf.int32))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Test the model
test_x, test_y = mnist.test.images, mnist.test.labels
print("Test Accuracy:", sess.run(accuracy_op, feed_dict={X: test_x, Y: test_y}))
Running the script above produces the following output:
Step 1, Avg Distance: 0.341471
Step 10, Avg Distance: 0.221609
Step 20, Avg Distance: 0.220328
Step 30, Avg Distance: 0.219776
Step 40, Avg Distance: 0.219419
Step 50, Avg Distance: 0.219154
Test Accuracy: 0.7127
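
Note that tf.contrib (and with it tensorflow.contrib.factorization.KMeans) and the tensorflow.examples.tutorials.mnist module were removed in TensorFlow 2.x. As a rough modern equivalent, and only as a sketch that is not part of the original post, the same cluster-then-majority-vote evaluation can be reproduced with scikit-learn's KMeans and tf.keras.datasets; scikit-learn clusters with Euclidean rather than cosine distance, so the numbers will differ.

import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

k, num_classes = 25, 10

# Load MNIST via Keras and flatten each 28x28 image to a 784-dim vector.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0
x_test = x_test.reshape(-1, 784).astype("float32") / 255.0

# Cluster the training images into k clusters (Euclidean distance).
km = KMeans(n_clusters=k, n_init=10, random_state=0)
train_idx = km.fit_predict(x_train)

# Vote: each cluster takes the most frequent training label among its members.
counts = np.zeros((k, num_classes))
for c, y in zip(train_idx, y_train):
    counts[c, y] += 1
labels_map = counts.argmax(axis=1)

# Map test images to their nearest centroid, then to that centroid's label.
test_idx = km.predict(x_test)
accuracy = (labels_map[test_idx] == y_test).mean()
print("Test Accuracy:", accuracy)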