Tensorflow autoencoder cost not decreasing?

Posted: 2016-10-06 14:11:54

Question:

I am doing unsupervised feature learning with an autoencoder in TensorFlow. I wrote the code below for an Amazon CSV dataset, and when I run it the cost does not decrease across iterations. Can you help me find the mistake in the code?

from __future__ import division, print_function, absolute_import

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
df=pd.read_csv('../dataset/amazon_1_b.csv')
df=df.drop(df.columns[0], axis=1)
#df1, df2 = df[:25000, :], df[25000:, :] if len(df) > 25000 else df, None
df1=df.head(25000)
df2=df.tail(len(df)-25000)
trY=df1['ACTION'].as_matrix()
teY=df2['ACTION'].as_matrix()
df1=df1.drop(df.columns[9], axis=1)
df2=df2.drop(df.columns[9], axis=1)
trX=df1.as_matrix()
teX=df2.as_matrix()



# Parameters
learning_rate = 0.01
training_epochs = 50
batch_size = 20
display_step = 1
examples_to_show = 10

# Network Parameters
n_hidden_1 = 20 # 1st layer num features
n_hidden_2 = 5 # 2nd layer num features
n_input = trX.shape[1] # number of input features in the Amazon data

# tf Graph input (only pictures)
X = tf.placeholder("float", [None, n_input])

weights = {
    'encoder_h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'encoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'decoder_h1': tf.Variable(tf.random_normal([n_hidden_2, n_hidden_1])),
    'decoder_h2': tf.Variable(tf.random_normal([n_hidden_1, n_input])),
}

biases = {
    'encoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'encoder_b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'decoder_b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'decoder_b2': tf.Variable(tf.random_normal([n_input])),
}



# Building the encoder
def encoder(x):
    # Encoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['encoder_h1']),
                                   biases['encoder_b1']))
    # Encoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['encoder_h2']),
                                   biases['encoder_b2']))
    return layer_2


# Building the decoder
def decoder(x):
    # Decoder Hidden layer with sigmoid activation #1
    layer_1 = tf.nn.sigmoid(tf.add(tf.matmul(x, weights['decoder_h1']),
                                   biases['decoder_b1']))
    # Decoder Hidden layer with sigmoid activation #2
    layer_2 = tf.nn.sigmoid(tf.add(tf.matmul(layer_1, weights['decoder_h2']),
                                   biases['decoder_b2']))
    return layer_2

# Construct model
encoder_op = encoder(X)
decoder_op = decoder(encoder_op)

# Prediction
y_pred = decoder_op
# Targets (Labels) are the input data.
y_true = X

# Define loss and optimizer, minimize the squared error
cost = tf.reduce_mean(tf.pow(y_true - y_pred, 2))
optimizer = tf.train.RMSPropOptimizer(learning_rate).minimize(cost)

# Initializing the variables
init = tf.initialize_all_variables()



# Launch the graph
# Using InteractiveSession (more convenient while using Notebooks)
sess = tf.InteractiveSession()
sess.run(init)

total_batch = int(trX.shape[0]/batch_size)
# Training cycle
for epoch in range(training_epochs):
    # Loop over all batches
    for i in range(total_batch):
        batch_xs= trX[batch_size*i:batch_size*(i+1)]
        # Run optimization op (backprop) and cost op (to get loss value)
        _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
    # Display logs per epoch step
    if epoch % display_step == 0:
        print("Epoch:", '%04d' % (epoch+1),
              "cost=", ":.9f".format(c))

print("Optimization Finished!")

# Applying encode and decode over test set
encode_decode = sess.run(
    y_pred, feed_dict={X: teX})

The link to the dataset is here. The link to the Python file is here.

Below are the results up to epoch 31; the cost stays exactly the same through all 50 epochs.

Epoch: 0001 cost= 18134403072.000000000
Epoch: 0002 cost= 18134403072.000000000
Epoch: 0003 cost= 18134403072.000000000
Epoch: 0004 cost= 18134403072.000000000
Epoch: 0005 cost= 18134403072.000000000
Epoch: 0006 cost= 18134403072.000000000
Epoch: 0007 cost= 18134403072.000000000
Epoch: 0008 cost= 18134403072.000000000
Epoch: 0009 cost= 18134403072.000000000
Epoch: 0010 cost= 18134403072.000000000
Epoch: 0011 cost= 18134403072.000000000
Epoch: 0012 cost= 18134403072.000000000
Epoch: 0013 cost= 18134403072.000000000
Epoch: 0014 cost= 18134403072.000000000
Epoch: 0015 cost= 18134403072.000000000
Epoch: 0016 cost= 18134403072.000000000
Epoch: 0017 cost= 18134403072.000000000
Epoch: 0018 cost= 18134403072.000000000
Epoch: 0019 cost= 18134403072.000000000
Epoch: 0020 cost= 18134403072.000000000
Epoch: 0021 cost= 18134403072.000000000
Epoch: 0022 cost= 18134403072.000000000
Epoch: 0023 cost= 18134403072.000000000
Epoch: 0024 cost= 18134403072.000000000
Epoch: 0025 cost= 18134403072.000000000
Epoch: 0026 cost= 18134403072.000000000
Epoch: 0027 cost= 18134403072.000000000
Epoch: 0028 cost= 18134403072.000000000
Epoch: 0029 cost= 18134403072.000000000
Epoch: 0030 cost= 18134403072.000000000
Epoch: 0031 cost= 18134403072.000000000

Comments:

- What do you mean by "the cost does not decrease across iterations"? The cost does not have to drop on every single iteration, but it should decrease overall.
- I mean that the cost stays the same even after 100 epochs. It does not decrease at all over the entire run.
- (1) Can you switch to a different activation function? (2) Can you change the initialization of the weights (don't use mean = 0, std = 1)? Use Xavier initialization. (A minimal sketch of this change follows these comments.)
- Can you post the dataset? Your autoencoder network does not look right.
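
A minimal sketch of the suggested initialization change, reusing the n_input, n_hidden_1 and n_hidden_2 values already defined in the question; the scaling factor below is one common Xavier-style choice, not something prescribed in the original post:

import numpy as np
import tensorflow as tf

# Xavier-style scaling: the stddev shrinks with the layer widths instead of
# the unit-variance normals used in the question.
def xavier_normal(fan_in, fan_out):
    stddev = np.sqrt(2.0 / (fan_in + fan_out))
    return tf.random_normal([fan_in, fan_out], stddev=stddev)

weights = {
    'encoder_h1': tf.Variable(xavier_normal(n_input, n_hidden_1)),
    'encoder_h2': tf.Variable(xavier_normal(n_hidden_1, n_hidden_2)),
    'decoder_h1': tf.Variable(xavier_normal(n_hidden_2, n_hidden_1)),
    'decoder_h2': tf.Variable(xavier_normal(n_hidden_1, n_input)),
}

The other suggestion, trying a different activation function, is an equally small change, e.g. swapping tf.nn.sigmoid for tf.nn.relu in the encoder and decoder layers.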

Answer 1:

In this case your optimization method, RMSPropOptimizer, seems to be really slow.

You may want to try the Adam optimizer instead; at least that worked for me.
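
A minimal sketch of that swap, leaving the rest of the question's code unchanged; the learning rate of 0.01 is just the value from the question and may still need tuning:

# Use Adam instead of RMSProp for the same cost tensor
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)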

Discussion:
