A Summary of Gradient Boosted Decision Tree (GBDT) Parameter Tuning


Reposted from 刘建平Pinard: https://www.cnblogs.com/pinard/p/6143927.html


import pandas as pd
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate
from sklearn import metrics
from sklearn.model_selection import GridSearchCV

import matplotlib.pylab as plt
%matplotlib inline
train = pd.read_csv('/home/kesci/input/data9405/train_modified.csv')
target = 'Disbursed'  # Disbursed is the binary classification target
IDcol = 'ID'
train['Disbursed'].value_counts()

0    19680
1      320
Name: Disbursed, dtype: int64

x_columns = [x for x in train.columns if x not in [target, IDcol]]
X = train[x_columns]
y = train['Disbursed']

First, leaving every parameter at its default, let's fit the data and take a look:

gbm0 = GradientBoostingClassifier(random_state=10)
gbm0.fit(X,y)
y_pred = gbm0.predict(X)
y_predprob = gbm0.predict_proba(X)[:,1]
print ("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print ("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))

Accuracy : 0.9852
AUC Score (Train): 0.900531

The output above shows the fit is decent. Next, let's see how parameter tuning can improve the model's generalization ability.
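Note that both numbers above are computed on the training data, so they overstate how well the model generalizes. As a hedged sketch (not part of the original post), you could also compute a cross-validated AUC baseline for the default model:

from sklearn.model_selection import cross_val_score

# 5-fold cross-validated AUC for the untuned baseline; this is a better
# estimate of generalization than the train-set AUC printed above.
cv_auc = cross_val_score(GradientBoostingClassifier(random_state=10),
                         X, y, scoring='roc_auc', cv=5)
print("CV AUC: %.4f +/- %.4f" % (cv_auc.mean(), cv_auc.std()))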

We start with the step size (learning_rate) and the number of iterations (n_estimators). In general, you begin by choosing a relatively small step size and grid-searching for the best number of iterations. Here we set the initial step size to 0.1 and grid-search over n_estimators as follows:

n_estimators

param_test1 = {'n_estimators':range(20,81,10)}
gsearch1 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, min_samples_split=300,
                                         min_samples_leaf=20, max_depth=8,
                                         max_features='sqrt', subsample=0.8,
                                         random_state=10),
    param_grid=param_test1, scoring='roc_auc', cv=5)  # the iid= argument was removed in scikit-learn 0.24
gsearch1.fit(X,y)
print(gsearch1.best_params_)
print(gsearch1.best_score_)

{'n_estimators': 60}
0.8192660696138212

As the output shows, the best number of iterations is 60.
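If you want to see how every candidate scored rather than just the winner, GridSearchCV exposes cv_results_. A small sketch (not in the original post):

# Mean CV AUC for each n_estimators value tried in the grid above.
for n, score in zip(gsearch1.cv_results_['param_n_estimators'],
                    gsearch1.cv_results_['mean_test_score']):
    print(n, round(score, 4))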

With a suitable number of iterations found, we now tune the decision trees themselves. First we grid-search the maximum tree depth max_depth and min_samples_split, the minimum number of samples required to split an internal node.

max_depth,min_samples_split

param_test2 = {'max_depth':range(3,14,2), 'min_samples_split':range(100,801,200)}
gsearch2 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,
                                         min_samples_leaf=20, max_features='sqrt',
                                         subsample=0.8, random_state=10),
    param_grid=param_test2, scoring='roc_auc', cv=5)
gsearch2.fit(X,y)
print(gsearch2.best_params_)
print(gsearch2.best_score_)

{'max_depth': 7, 'min_samples_split': 300}
0.8213724275914632

As the output shows, the best maximum depth is 7 and the best min_samples_split is 300.

Since a tree depth of 7 is a fairly reasonable value, we fix it. We cannot fix min_samples_split yet, however, because it interacts with the other tree parameters.

Next we tune min_samples_split together with min_samples_leaf, the minimum number of samples required at a leaf node.

min_samples_split,min_samples_leaf

param_test3 = {'min_samples_split':range(800,1900,200), 'min_samples_leaf':range(60,101,10)}
gsearch3 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,
                                         max_depth=7, max_features='sqrt',
                                         subsample=0.8, random_state=10),
    param_grid=param_test3, scoring='roc_auc', cv=5)
gsearch3.fit(X,y)
print(gsearch3.best_params_)
print(gsearch3.best_score_)

{'min_samples_leaf': 60, 'min_samples_split': 1200}

As the output shows, min_samples_leaf landed on the boundary of its grid (60), so it is worth extending the search to values below 60. Since this is only an example, you can rerun the grid search yourself with a range that includes values smaller than 60, as sketched below.
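A sketch of that follow-up search (the extended range below 60 is illustrative, not from the original post; min_samples_split is held at the 1200 found above):

param_test3b = {'min_samples_leaf': range(20, 71, 10)}  # extends below the old boundary of 60
gsearch3b = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,
                                         max_depth=7, min_samples_split=1200,
                                         max_features='sqrt', subsample=0.8,
                                         random_state=10),
    param_grid=param_test3b, scoring='roc_auc', cv=5)
gsearch3b.fit(X, y)
print(gsearch3b.best_params_)
print(gsearch3b.best_score_)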

Having tuned all these parameters, we can finally put them together in the GBDT class and see the effect. Let's fit the data with the new parameters:

gbm1 = GradientBoostingClassifier(
    learning_rate=0.1,
    n_estimators=60,
    max_depth=7,
    min_samples_leaf=60,
    min_samples_split=1200,
    max_features='sqrt',
    subsample=0.8,
    random_state=10)
gbm1.fit(X,y)
y_pred = gbm1.predict(X)
y_predprob = gbm1.predict_proba(X)[:,1]
print ("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print ("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))

Accuracy : 0.984
AUC Score (Train): 0.908099

Compared with the completely untuned fit at the start, the accuracy drops slightly. The main reason is that we use a subsample ratio of 0.8, so 20% of the data does not take part in fitting each tree.
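Because subsample is below 1, scikit-learn also records the out-of-bag (OOB) loss improvement contributed by each boosting stage in oob_improvement_. A small sketch (not in the original post) of how you might use it to check whether later trees still help:

# Cumulative OOB improvement per boosting stage; a curve that flattens or
# declines suggests additional trees no longer improve generalization.
cum_oob = np.cumsum(gbm1.oob_improvement_)
plt.plot(range(1, len(cum_oob) + 1), cum_oob)
plt.xlabel('boosting stage')
plt.ylabel('cumulative OOB improvement')
plt.show()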

Now we grid-search the maximum number of features, max_features.

max_features

param_test4 = {'max_features':range(7,20,2)}
gsearch4 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,
                                         max_depth=7, min_samples_leaf=60,
                                         min_samples_split=1200, subsample=0.8,
                                         random_state=10),
    param_grid=param_test4, scoring='roc_auc', cv=5)
gsearch4.fit(X,y)
print(gsearch4.best_params_)
print(gsearch4.best_score_)

{'max_features': 9}
0.822412506351626

Next we grid-search the subsample ratio:

subsample

param_test5 = {'subsample':[0.6,0.7,0.75,0.8,0.85,0.9]}
gsearch5 = GridSearchCV(
    estimator=GradientBoostingClassifier(learning_rate=0.1, n_estimators=60,
                                         max_depth=7, min_samples_leaf=60,
                                         min_samples_split=1200, max_features=9,
                                         random_state=10),
    param_grid=param_test5, scoring='roc_auc', cv=5)
gsearch5.fit(X,y)
print(gsearch5.best_params_)
print(gsearch5.best_score_)

{'subsample': 0.7}
0.8234378969766262

At this point we have essentially obtained all of our tuned parameter values. We can now halve the step size and double the maximum number of iterations to improve the model's generalization ability. Fitting the model again:

gbm2 = GradientBoostingClassifier(
    learning_rate=0.05,
    n_estimators=120,
    max_depth=7,
    min_samples_leaf=60,
    min_samples_split=1200,
    max_features=9,
    subsample=0.7,
    random_state=10)
        
gbm2.fit(X,y)
y_pred = gbm2.predict(X)
y_predprob = gbm2.predict_proba(X)[:,1]
print ("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print ("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))

Accuracy : 0.984
AUC Score (Train): 0.905324

The AUC score is slightly lower than the previous version's. This is because, to improve generalization and guard against overfitting, we halved the step size while doubling the maximum number of iterations, and also reduced the subsample ratio, all of which reduce how tightly we fit the training set.

Next we shrink the step size by a factor of 5 and increase the maximum number of iterations by a factor of 5, and fit the model again:

gbm3 = GradientBoostingClassifier(
    learning_rate=0.01,
    n_estimators=600,
    max_depth=7,
    min_samples_leaf=60,
    min_samples_split=1200,
    max_features=9,
    subsample=0.7,
    random_state=10)
gbm3.fit(X,y)
y_pred = gbm3.predict(X)
y_predprob = gbm3.predict_proba(X)[:,1]
print ("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print ("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))

Accuracy : 0.984
AUC Score (Train): 0.908581

As we can see, shrinking the step size while increasing the number of iterations can recover some training fit while preserving generalization ability.

Finally, we halve the step size once more and double the maximum number of iterations, and fit the model:

gbm4 = GradientBoostingClassifier(
    learning_rate=0.005,
    n_estimators=1200,
    max_depth=7,
    min_samples_leaf=60,
    min_samples_split=1200,
    max_features=9,
    subsample=0.7,
    random_state=10)
gbm4.fit(X,y)
y_pred = gbm4.predict(X)
y_predprob = gbm4.predict_proba(X)[:,1]
print ("Accuracy : %.4g" % metrics.accuracy_score(y.values, y_pred))
print ("AUC Score (Train): %f" % metrics.roc_auc_score(y, y_predprob))

Accuracy : 0.984
AUC Score (Train): 0.908232

As the output shows, the step size is now so small that the fit actually gets worse. In other words, the step size should not be set too small.
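To compare the step-size/iteration trade-off explored by gbm1 through gbm4 more systematically, here is a hedged sketch (not from the original post) that scores each (learning_rate, n_estimators) pair with cross-validated AUC instead of train-set AUC, using the final tuned tree parameters throughout:

from sklearn.model_selection import cross_val_score

# Keep learning_rate * n_estimators roughly constant and compare
# generalization with 5-fold CV AUC.
for lr, n_est in [(0.1, 60), (0.05, 120), (0.01, 600), (0.005, 1200)]:
    gbm = GradientBoostingClassifier(
        learning_rate=lr, n_estimators=n_est, max_depth=7,
        min_samples_leaf=60, min_samples_split=1200,
        max_features=9, subsample=0.7, random_state=10)
    auc = cross_val_score(gbm, X, y, scoring='roc_auc', cv=5)
    print("lr=%.3f, n_estimators=%4d: CV AUC %.4f" % (lr, n_est, auc.mean()))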
