multilayer_perceptron: ConvergenceWarning: Stochastic Optimizer: Maximum iterations reached and the optimization hasn't converged yet. Warning?

Posted 2018-02-12 05:04:29

I wrote a basic program to understand what is happening in the MLP classifier:

from sklearn.neural_network import MLPClassifier

Data: a dataset of body metrics (height, width and shoe size) labeled male or female:

X = [[181, 80, 44], [177, 70, 43], [160, 60, 38], [154, 54, 37], [166, 65, 40],
     [190, 90, 47], [175, 64, 39],
     [177, 70, 40], [159, 55, 37], [171, 75, 42], [181, 85, 43]]
y = ['male', 'male', 'female', 'female', 'male', 'male', 'female', 'female',
     'female', 'male', 'male']

Prepare the model:

clf = MLPClassifier(hidden_layer_sizes=(3,), activation='logistic',
                    solver='adam', alpha=0.0001, learning_rate='constant',
                    learning_rate_init=0.001)

Train:

clf = clf.fit(X, y)

Attributes of the learned classifier:

print('current loss computed with the loss function: ',clf.loss_)
print('coefs: ', clf.coefs_)
print('intercepts: ',clf.intercepts_)
print(' number of iterations the solver: ', clf.n_iter_)
print('num of layers: ', clf.n_layers_)
print('Num of o/p: ', clf.n_outputs_)

Test:

print('prediction: ', clf.predict([  [179, 69, 40],[175, 72, 45] ]))

Compute accuracy:

print( 'accuracy: ',clf.score( [ [179, 69, 40],[175, 72, 45] ], ['female','male'], sample_weight=None ))

RUN 1

current loss computed with the loss function:  0.617580287851
coefs:  [array([[ 0.17222046, -0.02541928,  0.02743722],
       [-0.19425909,  0.14586716,  0.17447281],
       [-0.4063903 ,  0.148889  ,  0.02523247]]), array([[-0.66332919],
       [ 0.04249613],
       [-0.10474769]])]
intercepts:  [array([-0.05611057,  0.32634023,  0.51251098]), array([ 0.17996649])]
 number of iterations the solver:  200
num of layers:  3
Num of o/p:  1
prediction:  ['female' 'male']
accuracy:  1.0
/home/anubhav/anaconda3/envs/mytf/lib/python3.6/site-packages/sklearn/neural_network/multilayer_perceptron.py:563: ConvergenceWarning: Stochastic Optimizer: Maximum iterations reached and the optimization hasn't converged yet.
  % (), ConvergenceWarning)

RUN 2

current loss computed with the loss function:  0.639478303643
coefs:  [array([[ 0.02300866,  0.21547873, -0.1272455 ],
       [-0.2859666 ,  0.40159542,  0.55881399],
       [ 0.39902066, -0.02792529, -0.04498812]]), array([[-0.64446013],
       [ 0.60580985],
       [-0.22001532]])]
intercepts:  [array([-0.10482234,  0.0281211 , -0.16791644]), array([-0.19614561])]
 number of iterations the solver:  39
num of layers:  3
Num of o/p:  1
prediction:  ['female' 'female']
accuracy:  0.5

RUN 3

current loss computed with the loss function:  0.691966937074
coefs:  [array([[ 0.21882191, -0.48037975, -0.11774392],
       [-0.15890357,  0.06887471, -0.03684797],
       [-0.28321762,  0.48392007,  0.34104955]]), array([[ 0.08672174],
       [ 0.1071615 ],
       [-0.46085333]])]
intercepts:  [array([-0.36606747,  0.21969636,  0.10138625]), array([-0.05670653])]
 number of iterations the solver:  4
num of layers:  3
Num of o/p:  1
prediction:  ['male' 'male']
accuracy:  0.5

RUN 4

current loss computed with the loss function:  0.697102567593
coefs:  [array([[ 0.32489731, -0.18529689, -0.08712877],
       [-0.35425908,  0.04214241,  0.41249622],
       [-0.19993622, -0.38873908, -0.33057999]]), array([[ 0.43304555],
       [ 0.37959392],
       [ 0.55998979]])]
intercepts:  [array([ 0.11555407, -0.3473817 , -0.16852093]), array([ 0.31326347])]
 number of iterations the solver:  158
num of layers:  3
Num of o/p:  1
prediction:  ['male' 'male']
accuracy:  0.5

----------------------------------------------------------------

I have the following questions:

1. Why did the optimizer not converge in RUN 1?
2. Why did the number of iterations suddenly become so low in RUN 3 and so high in RUN 4?
3. What else can be done to increase the accuracy that I got in RUN 1?


Answer 1:

1: Your MLP did not converge. The algorithm optimizes by stepwise convergence towards a minimum, and in RUN 1 it did not find your minimum within the allowed number of iterations.
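A minimal sketch of the most direct fix for the warning, giving the solver a larger iteration budget (the values `max_iter=5000` and `random_state=0` are assumed here, not taken from the question):

```python
from sklearn.neural_network import MLPClassifier

X = [[181, 80, 44], [177, 70, 43], [160, 60, 38], [154, 54, 37], [166, 65, 40],
     [190, 90, 47], [175, 64, 39], [177, 70, 40], [159, 55, 37], [171, 75, 42],
     [181, 85, 43]]
y = ['male', 'male', 'female', 'female', 'male', 'male', 'female', 'female',
     'female', 'male', 'male']

# Same model as in the question, but with a larger iteration budget.
# The solver stops once the loss stops improving by more than `tol`,
# so it usually finishes well before max_iter and without the warning.
clf = MLPClassifier(hidden_layer_sizes=(3,), activation='logistic',
                    solver='adam', alpha=0.0001, learning_rate_init=0.001,
                    max_iter=5000, random_state=0)
clf.fit(X, y)
print('iterations used:', clf.n_iter_)
```

With the default `max_iter=200`, RUN 1 simply hit the cap before meeting the stopping tolerance, which is exactly what the warning says.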

2: Differences between runs. Your MLP starts with random initial weights, which is why you get different results on the same data. It seems that in your fourth run you started very close to a minimum. You can set the MLP's random_state parameter to a constant, e.g. random_state=0, to get the same result over and over again.
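A small sketch of that reproducibility point (the values `random_state=0` and `max_iter=500` are assumed): two models fitted with the same seed start from identical random weights and therefore follow the same optimization path.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

X = [[181, 80, 44], [177, 70, 43], [160, 60, 38], [154, 54, 37], [166, 65, 40],
     [190, 90, 47], [175, 64, 39], [177, 70, 40], [159, 55, 37], [171, 75, 42],
     [181, 85, 43]]
y = ['male', 'male', 'female', 'female', 'male', 'male', 'female', 'female',
     'female', 'male', 'male']

# Two identically seeded models get identical initial weights, so they
# end with the same coefficients and the same predictions.
a = MLPClassifier(hidden_layer_sizes=(3,), random_state=0, max_iter=500).fit(X, y)
b = MLPClassifier(hidden_layer_sizes=(3,), random_state=0, max_iter=500).fit(X, y)

print(np.allclose(a.coefs_[0], b.coefs_[0]))
print(a.predict([[179, 69, 40]]) == b.predict([[179, 69, 40]]))
```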

3 is the hardest point. You can optimize the parameters with

from sklearn.model_selection import GridSearchCV

Gridsearch splits your data set into equally sized parts, uses one part as test data and the rest as training data (i.e. cross-validation). So it trains and evaluates one classifier per part that you split your data into.

You need to specify the number of parts to split into (your data set is small, so I suggest 2 or 3), a classifier (your MLP), and a grid of the parameters you want to optimize, like this:

param_grid = [
    {
        'activation': ['identity', 'logistic', 'tanh', 'relu'],
        'solver': ['lbfgs', 'sgd', 'adam'],
        'hidden_layer_sizes': [
            (1,), (2,), (3,), (4,), (5,), (6,), (7,), (8,), (9,), (10,),
            (11,), (12,), (13,), (14,), (15,), (16,), (17,), (18,), (19,),
            (20,), (21,)
        ]
    }
]

Because you once got 100% accuracy with a hidden layer of three neurons, you can try to optimize parameters like the learning rate and momentum instead of the hidden layer sizes.
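Such a grid over learning rate and momentum could look like the following sketch (the value ranges are assumed, and since momentum is only used by the 'sgd' solver, the solver is fixed here):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

X = [[181, 80, 44], [177, 70, 43], [160, 60, 38], [154, 54, 37], [166, 65, 40],
     [190, 90, 47], [175, 64, 39], [177, 70, 40], [159, 55, 37], [171, 75, 42],
     [181, 85, 43]]
y = ['male', 'male', 'female', 'female', 'male', 'male', 'female', 'female',
     'female', 'male', 'male']

# Hypothetical search ranges; the hidden layer is fixed at three neurons
# as in the question, and momentum only applies to solver='sgd'.
param_grid = {
    'learning_rate_init': [0.001, 0.01, 0.1],
    'momentum': [0.5, 0.9, 0.99],
}

search = GridSearchCV(
    MLPClassifier(hidden_layer_sizes=(3,), solver='sgd', max_iter=2000,
                  random_state=0),
    param_grid, cv=3, scoring='accuracy')
search.fit(X, y)
print(search.best_params_)
```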

Use Gridsearch like this:

clf = GridSearchCV(MLPClassifier(), param_grid, cv=3, scoring='accuracy')
clf.fit(X,y)


print("Best parameters set found on development set:")
print(clf.best_params_)

