Custom scoring function for grid search classification
Posted: 2018-10-27 00:13:35

Question: I want to run GridSearchCV for a RandomForestClassifier in scikit-learn, and I have a custom scoring function I would like to use. The scoring function only works when it is given class probabilities (i.e. it has to call rfc.predict_proba(...) rather than rfc.predict(...)).
How can I tell GridSearchCV to use predict_proba() instead of predict()?
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
import numpy as np

def my_custom_loss_func(ground_truth, predictions):
    # predictions must be probabilities - e.g. model.predict_proba()
    diff = np.abs(ground_truth - predictions).max()
    return np.log(1 + diff)

param_grid = {'min_samples_leaf': [1, 2, 5, 10, 20, 50, 100],
              'n_estimators': [100, 200, 300]}

grid = GridSearchCV(RandomForestClassifier(), param_grid=param_grid,
                    scoring=my_custom_loss_func)
Answer 1: See the documentation here: the scoring callable should have the signature (estimator, X, y).
You can then call estimator.predict_proba(X) inside your own function.
Alternatively, you can wrap your metric with make_scorer and pass needs_proba=True.
A complete code example:
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import make_scorer
import pandas as pd
import numpy as np

X, y = make_classification()

def my_custom_loss_func_est(estimator, X, y):
    # scorer signature (estimator, X, y): call predict_proba() on the estimator
    diff = np.abs(y - estimator.predict_proba(X)[:, 1]).max()
    return -np.log(1 + diff)

def my_custom_loss_func(ground_truth, predictions):
    # metric signature (y_true, y_prob): predictions are probabilities
    # supplied by make_scorer(needs_proba=True)
    diff = np.abs(ground_truth - predictions[:, 1]).max()
    return np.log(1 + diff)

custom_scorer = make_scorer(my_custom_loss_func,
                            greater_is_better=False,
                            needs_proba=True)
Using the scorer object:
param_grid = {'min_samples_leaf': [10, 50], 'n_estimators': [100, 200]}

grid = GridSearchCV(RandomForestClassifier(), param_grid=param_grid,
                    scoring=custom_scorer, return_train_score=True)
grid.fit(X, y)

pd.DataFrame(grid.cv_results_)[['mean_test_score',
                                'mean_train_score',
                                'param_min_samples_leaf',
                                'param_n_estimators']]
mean_test_score mean_train_score param_min_samples_leaf param_n_estimators
0 -0.505201 -0.495011 10 100
1 -0.509190 -0.498283 10 200
2 -0.406279 -0.406292 50 100
3 -0.406826 -0.406862 50 200
Using the (estimator, X, y) loss function directly is just as simple:
grid = GridSearchCV(RandomForestClassifier(), param_grid=param_grid,
                    scoring=my_custom_loss_func_est, return_train_score=True)
grid.fit(X, y)

pd.DataFrame(grid.cv_results_)[['mean_test_score',
                                'mean_train_score',
                                'param_min_samples_leaf',
                                'param_n_estimators']]
mean_test_score mean_train_score param_min_samples_leaf param_n_estimators
0 -0.509098 -0.491462 10 100
1 -0.497693 -0.490936 10 200
2 -0.409025 -0.408957 50 100
3 -0.409525 -0.409500 50 200
The results differ between the two runs because the CV folds are different (I assume; I am too lazy to set a seed and edit again right now. Or is there a better way to paste code without indenting everything by hand?)
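One way to make the two runs directly comparable would be to fix random_state everywhere. A minimal sketch (assuming the param_grid and custom_scorer defined above):

from sklearn.model_selection import StratifiedKFold

# Seed the data, the forest and the CV splits so both scorers see identical folds
X, y = make_classification(random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid=param_grid,      # param_grid from above
                    scoring=custom_scorer,      # custom_scorer from above
                    cv=cv,
                    return_train_score=True)
grid.fit(X, y)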
Comments:
With sklearn==0.23.2, using the scorer object (scoring=custom_scorer) raises IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed, pointing at the line diff = np.abs(ground_truth - predictions[:, 1]).max(). That seems to imply predictions is a 1-D array, yet it is being indexed as a 2-D array. This is confusing because .predict_proba() should return a 2-D array, so indexing it with [:, 1] ought to be fine. Any clue what might cause this error?
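One possible explanation (an assumption, not verified against that exact release): for binary targets, newer scikit-learn scorers built with needs_proba=True pass only the positive-class column to the metric as a 1-D array, so slicing with [:, 1] fails. A defensive variant of the metric that handles both shapes:

import numpy as np

def my_custom_loss_func(ground_truth, predictions):
    # Newer sklearn may already pass 1-D positive-class probabilities for
    # binary problems; only slice when a 2-D probability array is received.
    proba = predictions[:, 1] if predictions.ndim == 2 else predictions
    diff = np.abs(ground_truth - proba).max()
    return np.log(1 + diff)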