Neural Network MNIST: Backpropagation is correct, but training/test accuracy is very low
【Posted】: 2018-01-08 21:44:42
【Question】: I am building a neural network to learn to recognize handwritten digits from MNIST. I have confirmed that backpropagation computes the gradients correctly (gradient checking reports only a negligible error).
It seems that no matter how I train the weights, the cost function always tends toward about 3.24-3.25 (it never goes below that value, only approaches it from above), and the training/test accuracy is very low (about 11% on the test set). The final h values all end up very close to 0.1 and very close to each other.
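(A side note added for context, not from the original post: if all ten sigmoid outputs sit at 0.1, the per-example cross-entropy used in the cost function J below evaluates to almost exactly the reported plateau.)

import math

# One-hot label: the true class contributes -log(0.1), the other nine classes -log(0.9)
plateau = -(math.log(0.1) + 9 * math.log(0.9))
print(plateau)  # ~3.251, matching the observed 3.24-3.25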
I cannot figure out why my program does not produce better results. I was wondering whether someone could look over my code and tell me why this is happening. Thank you very much for your help, I really appreciate it!
Here is my Python code:
import numpy as np
import math
from tensorflow.examples.tutorials.mnist import input_data
# Neural network has four layers
# The input layer has 784 nodes
# The two hidden layers each have 5 nodes
# The output layer has 10 nodes
num_layer = 4
num_node = [784,5,5,10]
num_output_node = 10
# 30000 training sets are used
# 10000 test sets are used
# Can be adjusted
Ntrain = 30000
Ntest = 10000
# Sigmoid Function
def g(X):
    return 1/(1 + np.exp(-X))
# Forwardpropagation
def h(W,X):
    a = X
    for l in range(num_layer - 1):
        a = np.insert(a,0,1)
        z = np.dot(a,W[l])
        a = g(z)
    return a
# Cost Function
def J(y, W, X, Lambda):
    cost = 0
    for i in range(Ntrain):
        H = h(W,X[i])
        for k in range(num_output_node):
            cost = cost + y[i][k] * math.log(H[k]) + (1-y[i][k]) * math.log(1-H[k])
    regularization = 0
    for l in range(num_layer - 1):
        for i in range(num_node[l]):
            for j in range(num_node[l+1]):
                regularization = regularization + W[l][i+1][j] ** 2
    return (-1/Ntrain * cost + Lambda / (2*Ntrain) * regularization)
# Backpropagation - confirmed to be correct
# Algorithm based on https://www.coursera.org/learn/machine-learning/lecture/1z9WW/backpropagation-algorithm
# Returns D, the value of the gradient
def BackPropagation(y, W, X, Lambda):
    # delta accumulates the gradient contributions over all training examples
    delta = np.empty(num_layer-1, dtype = object)
    for l in range(num_layer - 1):
        delta[l] = np.zeros((num_node[l]+1,num_node[l+1]))
    for i in range(Ntrain):
        # Forward pass, storing each layer's activations (without the bias) in A
        A = np.empty(num_layer-1, dtype = object)
        a = X[i]
        for l in range(num_layer - 1):
            A[l] = a
            a = np.insert(a,0,1)
            z = np.dot(a,W[l])
            a = g(z)
        # Output-layer error, then propagate it backwards through the hidden layers
        diff = a - y[i]
        delta[num_layer-2] = delta[num_layer-2] + np.outer(np.insert(A[num_layer-2],0,1),diff)
        for l in range(num_layer-2):
            index = num_layer-2-l
            diff = np.multiply(np.dot(np.array([W[index][k+1] for k in range(num_node[index])]), diff), np.multiply(A[index], 1-A[index]))
            delta[index-1] = delta[index-1] + np.outer(np.insert(A[index-1],0,1),diff)
    # Average over the training set and add regularization to the non-bias weights
    D = np.empty(num_layer-1, dtype = object)
    for l in range(num_layer - 1):
        D[l] = np.zeros((num_node[l]+1,num_node[l+1]))
    for l in range(num_layer-1):
        for i in range(num_node[l]+1):
            if i == 0:
                for j in range(num_node[l+1]):
                    D[l][i][j] = 1/Ntrain * delta[l][i][j]
            else:
                for j in range(num_node[l+1]):
                    D[l][i][j] = 1/Ntrain * (delta[l][i][j] + Lambda * W[l][i][j])
    return D
# Neural network - this is where the learning/adjusting of weights occur
# W is the weights
# learn is the learning rate
# iterations is the number of iterations we pass over the training set
# Lambda is the regularization parameter
def NeuralNetwork(y, X, learn, iterations, Lambda):
    W = np.empty(num_layer-1, dtype = object)
    for l in range(num_layer - 1):
        W[l] = np.random.rand(num_node[l]+1,num_node[l+1])/100
    for k in range(iterations):
        print(J(y, W, X, Lambda))
        D = BackPropagation(y, W, X, Lambda)
        for l in range(num_layer-1):
            W[l] = W[l] - learn * D[l]
    print(J(y, W, X, Lambda))
    return W
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)
# Training data, read from MNIST
inputpix = []
output = []
for i in range(Ntrain):
    inputpix.append(2 * np.array(mnist.train.images[i]) - 1)
    output.append(np.array(mnist.train.labels[i]))
np.savetxt('input.txt', inputpix, delimiter=' ')
np.savetxt('output.txt', output, delimiter=' ')
# Train the weights
finalweights = NeuralNetwork(output, inputpix, 2, 5, 1)
# Test data
inputtestpix = []
outputtest = []
for i in range(Ntest):
    inputtestpix.append(2 * np.array(mnist.test.images[i]) - 1)
    outputtest.append(np.array(mnist.test.labels[i]))
np.savetxt('inputtest.txt', inputtestpix, delimiter=' ')
np.savetxt('outputtest.txt', outputtest, delimiter=' ')
# Determine the accuracy of the training data
count = 0
for i in range(Ntrain):
    H = h(finalweights,inputpix[i])
    print(H)
    for j in range(num_output_node):
        if H[j] == np.amax(H) and output[i][j] == 1:
            count = count + 1
print(count/Ntrain)
# Determine the accuracy of the test data
count = 0
for i in range(Ntest):
    H = h(finalweights,inputtestpix[i])
    print(H)
    for j in range(num_output_node):
        if H[j] == np.amax(H) and outputtest[i][j] == 1:
            count = count + 1
print(count/Ntest)
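For reference, a minimal numerical gradient check along the lines of the one mentioned in the question (this helper is a sketch added here, not code from the original post; it assumes the J and BackPropagation functions defined above, and it is slow because every call to J runs over all Ntrain examples):

def gradient_check(y, W, X, Lambda, eps=1e-4, num_checks=5):
    # Compare the analytic gradient from BackPropagation against a central
    # finite-difference estimate of J for a few randomly chosen weights.
    D = BackPropagation(y, W, X, Lambda)
    for _ in range(num_checks):
        l = np.random.randint(num_layer - 1)
        i = np.random.randint(num_node[l] + 1)
        j = np.random.randint(num_node[l+1])
        original = W[l][i][j]
        W[l][i][j] = original + eps
        J_plus = J(y, W, X, Lambda)
        W[l][i][j] = original - eps
        J_minus = J(y, W, X, Lambda)
        W[l][i][j] = original  # restore the weight
        numeric = (J_plus - J_minus) / (2 * eps)
        print(numeric, D[l][i][j], abs(numeric - D[l][i][j]))

# Example usage: gradient_check(output, finalweights, inputpix, 1)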
【Comments】:
Could you change one of the tags to python? That way the code will be highlighted properly.

【Answer 1】: Your network is tiny: with 5 neurons per hidden layer it is essentially a linear model. Increase that to around 256 neurons per layer.
Note that even a trivial linear model has 784 * 10 + 10 (bias) parameters, for a total of 7850 floats. Your neural network, on the other hand, has 784 * 5 + 5 + 5 * 5 + 5 + 5 * 10 + 10 = 3925 + 30 + 60 = 4015. In other words, despite being a nonlinear neural network, it is actually a smaller model than a simple logistic regression applied to this problem. Logistic regression by itself already gets around 11% error here, so you cannot expect to beat it with less capacity. This is not a rigorous argument, of course, but it should give you some intuition about why it does not work.
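To make the capacity comparison concrete, here is the same arithmetic as a small sketch (it uses only the layer sizes from the question; nothing beyond the counts above):

# Parameter counts: logistic regression on raw pixels vs. the question's network
sizes = [784, 5, 5, 10]
logreg_params = 784 * 10 + 10  # 7850
nn_params = sum((sizes[l] + 1) * sizes[l+1] for l in range(len(sizes) - 1))
print(logreg_params, nn_params)  # 7850 4015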
The second issue is with the other hyperparameters you seem to be using:
- A huge learning rate (is that really 2?); it should be on the order of 0.0001.
- Very few training iterations (are you only running 5 passes over the training set?).
- A large regularization parameter (set to 1), so your network is again heavily penalized for learning anything; change it to something orders of magnitude smaller (see the sketch below).
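A possible illustration of those suggestions, reusing the asker's own NeuralNetwork function (the specific values below are untested starting points for tuning, not verified settings):

# Hypothetical re-run with wider layers and gentler hyperparameters.
# num_node is the module-level global read by the asker's functions,
# so reassigning it here changes the architecture they build.
num_node = [784, 256, 256, 10]   # much wider hidden layers
learn = 0.0001                   # far smaller learning rate
iterations = 100                 # many more passes over the training set
Lambda = 0.01                    # much weaker regularization
finalweights = NeuralNetwork(output, inputpix, learn, iterations, Lambda)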
【Comments】:
It turns out the main problem was that I was using the wrong activation function. That said, adding more neurons per layer helped the accuracy a lot. I was just wondering: how do you choose all the hyperparameters and the number of neurons per layer? Thank you very much for your help!

Sigmoid activation is fine; it is not the best choice, but for MNIST it should be OK. Setting hyperparameters is a bit of a dark art: there are some rules of thumb (like being able to tell that you are clearly not using enough of something), but mostly it comes down to experience with the model and/or a lot of searching over and checking different settings. For example, if the training error gets stuck, your model probably lacks capacity (so too few neurons), and so on, but there are no "hard" rules here.

Interesting; yes, I will keep these in mind when setting hyperparameters. Thank you very much for everything, I really appreciate it!

【Answer 2】: The NN architecture is most likely underfitting. Perhaps the learning rate is too high or too low, or the regularization parameter is where most of the problem lies.