Sparse autoencoder cost function in TensorFlow
Posted: 2017-07-13 05:50:12

I have been reading through various TensorFlow tutorials to get familiar with how it works, and I have become interested in using autoencoders.
I started with the model autoencoder from the TensorFlow models repository:
https://github.com/tensorflow/models/tree/master/autoencoder
I got it working, and when visualizing the weights I expected to see something like the following:

However, my autoencoder gives me garbage-looking weights (despite accurately recreating the input image).

Further reading suggests that what I am missing is that my autoencoder is not sparse, so I need to enforce a sparsity cost on the weights.
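For reference, the formulation I keep seeing in the sparse-autoencoder literature (for example the UFLDL notes) penalizes the KL divergence between a target sparsity rho and the average activation of each hidden unit over a batch. A minimal sketch, where the function name and the epsilon are my own rather than anything from the repository:

import tensorflow as tf

def sparsity_penalty(hidden, rho=0.05, eps=1e-8):
    # hidden: [batch, n_hidden] activations in (0, 1), e.g. sigmoid outputs
    rho_hat = tf.reduce_mean(hidden, axis=0)  # average activation of each hidden unit
    kl = (rho * tf.log(rho / (rho_hat + eps)) +
          (1 - rho) * tf.log((1 - rho) / (1 - rho_hat + eps)))
    return tf.reduce_sum(kl)  # scalar penalty, summed over hidden units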
I tried adding a sparsity cost to the original code (based on this example), but it does not seem to change the weights to look anything like the model's.

How do I properly change the cost so that I get features that look like the ones typically found when autoencoding the MNIST dataset? My modified model is below:
import numpy as np
import random
import math
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt

def xavier_init(fan_in, fan_out, constant = 1):
    low = -constant * np.sqrt(6.0 / (fan_in + fan_out))
    high = constant * np.sqrt(6.0 / (fan_in + fan_out))
    return tf.random_uniform((fan_in, fan_out), minval = low, maxval = high, dtype = tf.float32)

class AdditiveGaussianNoiseAutoencoder(object):
    def __init__(self, n_input, n_hidden, transfer_function = tf.nn.sigmoid, optimizer = tf.train.AdamOptimizer(),
                 scale = 0.1):
        self.n_input = n_input
        self.n_hidden = n_hidden
        self.transfer = transfer_function
        self.scale = tf.placeholder(tf.float32)
        self.training_scale = scale
        network_weights = self._initialize_weights()
        self.weights = network_weights
        self.sparsity_level = 0.1  # np.repeat([0.05], self.n_hidden).astype(np.float32)
        self.sparse_reg = 10

        # model: corrupt the input with additive Gaussian noise, encode, then decode
        self.x = tf.placeholder(tf.float32, [None, self.n_input])
        self.hidden = self.transfer(tf.add(tf.matmul(self.x + scale * tf.random_normal((n_input,)),
                                                     self.weights['w1']),
                                           self.weights['b1']))
        self.reconstruction = tf.add(tf.matmul(self.hidden, self.weights['w2']), self.weights['b2'])

        # cost: squared reconstruction error plus the weighted sparsity (KL) penalty
        self.cost = 0.5 * tf.reduce_sum(tf.pow(tf.subtract(self.reconstruction, self.x), 2.0)) + self.sparse_reg \
                    * self.kl_divergence(self.sparsity_level, self.hidden)

        self.optimizer = optimizer.minimize(self.cost)

        init = tf.global_variables_initializer()
        self.sess = tf.Session()
        self.sess.run(init)

    def _initialize_weights(self):
        all_weights = dict()
        all_weights['w1'] = tf.Variable(xavier_init(self.n_input, self.n_hidden))
        all_weights['b1'] = tf.Variable(tf.zeros([self.n_hidden], dtype = tf.float32))
        all_weights['w2'] = tf.Variable(tf.zeros([self.n_hidden, self.n_input], dtype = tf.float32))
        all_weights['b2'] = tf.Variable(tf.zeros([self.n_input], dtype = tf.float32))
        return all_weights

    def partial_fit(self, X):
        cost, opt = self.sess.run((self.cost, self.optimizer),
                                  feed_dict = {self.x: X, self.scale: self.training_scale})
        return cost

    def kl_divergence(self, p, p_hat):
        # element-wise KL divergence between the target sparsity p and the hidden activations
        return tf.reduce_mean(p * tf.log(p) - p * tf.log(p_hat) + (1 - p) * tf.log(1 - p) - (1 - p) * tf.log(1 - p_hat))

    def calc_total_cost(self, X):
        return self.sess.run(self.cost, feed_dict = {self.x: X, self.scale: self.training_scale})

    def transform(self, X):
        return self.sess.run(self.hidden, feed_dict = {self.x: X, self.scale: self.training_scale})

    def generate(self, hidden = None):
        if hidden is None:
            hidden = np.random.normal(size = self.n_hidden)  # size must be an int/tuple, not a Variable
        return self.sess.run(self.reconstruction, feed_dict = {self.hidden: hidden})

    def reconstruct(self, X):
        return self.sess.run(self.reconstruction, feed_dict = {self.x: X, self.scale: self.training_scale})

    def getWeights(self):
        return self.sess.run(self.weights['w1'])

    def getBiases(self):
        return self.sess.run(self.weights['b1'])
mnist = input_data.read_data_sets('MNIST_data', one_hot = True)

def get_random_block_from_data(data, batch_size):
    start_index = np.random.randint(0, len(data) - batch_size)
    return data[start_index:(start_index + batch_size)]

X_train = mnist.train.images
X_test = mnist.test.images

n_samples = int(mnist.train.num_examples)
training_epochs = 50
batch_size = 128
display_step = 1

autoencoder = AdditiveGaussianNoiseAutoencoder(n_input = 784,
                                               n_hidden = 200,
                                               transfer_function = tf.nn.sigmoid,
                                               optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.01),
                                               scale = 0.01)

for epoch in range(training_epochs):
    avg_cost = 0.
    total_batch = int(n_samples / batch_size)
    # Loop over all batches
    for i in range(total_batch):
        batch_xs = get_random_block_from_data(X_train, batch_size)
        # Fit training using batch data
        cost = autoencoder.partial_fit(batch_xs)
        # Compute average loss
        avg_cost += cost / n_samples * batch_size
    # Display logs per epoch step
    if epoch % display_step == 0:
        print("Epoch:", '%04d' % (epoch + 1), "cost=", avg_cost)

print("Total cost: " + str(autoencoder.calc_total_cost(X_test)))

imageToUse = random.choice(mnist.test.images)
plt.imshow(np.reshape(imageToUse, [28, 28]), interpolation="nearest", cmap="gray", clim=(0, 1.0))
plt.show()

# input weights
wts = autoencoder.getWeights()
dim = math.ceil(math.sqrt(autoencoder.n_hidden))
plt.figure(1, figsize=(dim, dim))
for i in range(0, autoencoder.n_hidden):
    im = wts.flatten()[i::autoencoder.n_hidden].reshape((28, 28))
    plt.subplot(dim, dim, i + 1)
    # plt.title('Feature Weights ' + str(i))
    plt.imshow(im, cmap="gray", clim=(-1.0, 1.0))
    plt.colorbar()
plt.show()

predicted_imgs = autoencoder.reconstruct(X_test[:100])

# plot the reconstructed images
plt.figure(1, figsize=(10, 10))
plt.title('Autoencoded Images')
for i in range(0, 100):
    im = predicted_imgs[i].reshape((28, 28))
    plt.subplot(10, 10, i + 1)
    plt.imshow(im, cmap="gray", clim=(0.0, 1.0))
plt.show()
[Answer 1]

I do not know if this will work for you, but I have seen it promote some sparsity in my own networks. I would suggest modifying your loss to use a combination of softmax cross-entropy (or KL divergence, if you prefer) and an l2 regularization loss on the weights. I compute the l2 loss with the following:
l2 = sum(tf.nn.l2_loss(var) for var in tf.trainable_variables() if not 'biases' in var.name)
This restricts me to regularizing only the weights and not the biases, assuming your bias tensors have 'biases' in their names (many of the tf.contrib.rnn libraries name their bias tensors so that this works). The overall cost function I use is:
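For example (the variable names here are my own illustration, not something tf.contrib.rnn prescribes):

with tf.variable_scope('layer1'):
    w = tf.get_variable('weights', shape=[784, 200])
    b = tf.get_variable('biases', shape=[200])
# b.name is 'layer1/biases:0', so the 'biases' filter above excludes it from the l2 term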
cost = tf.nn.softmax_or_kl_divergence_or_whatever(labels=labels, logits=logits)
cost = tf.reduce_mean(cost)
cost = cost + beta * l2
where beta is a hyperparameter of the network that I then vary while exploring my hyperparameter space.
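As a rough, untested sketch of how this could be wired into the autoencoder from the question: note that the question's bias Variables do not actually have 'biases' in their names (they get default names), so it is simpler to penalize the weight matrices from the weights dict directly. beta here is only an assumed starting value:

# Sketch only: squared-error reconstruction cost plus an l2 penalty on the weights,
# intended to replace the cost line inside __init__ above.
l2 = tf.nn.l2_loss(self.weights['w1']) + tf.nn.l2_loss(self.weights['w2'])
recon = 0.5 * tf.reduce_sum(tf.pow(tf.subtract(self.reconstruction, self.x), 2.0))
beta = 1e-3  # hyperparameter; needs tuning on a validation set
self.cost = recon + beta * l2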
Another option that is very similar to this is l1 regularization. This is supposed to promote sparsity more than l2 regularization. In my own example I was not explicitly trying to promote sparsity, but saw it as a consequence of l2 regularization; maybe l1 will give you more luck. You can implement l1 regularization with:
l1 = sum(tf.reduce_sum(tf.abs(var)) for var in tf.trainable_variables() if not 'biases' in var.name)
and, following the cost definition above, use l1 in place of l2.
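A corresponding sketch for the question's autoencoder, with the same assumed beta as in the l2 sketch:

# Sketch: l1 penalty on the weight matrices instead of l2; l1 tends to drive
# individual weights to exactly zero, which is why it promotes sparsity more strongly.
l1 = tf.reduce_sum(tf.abs(self.weights['w1'])) + tf.reduce_sum(tf.abs(self.weights['w2']))
self.cost = recon + beta * l1  # recon and beta defined as in the l2 sketch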