Does recursive feature elimination necessarily give better performance?
I am analyzing the data below. I first fit a logistic regression model, make predictions, and compute accuracy and AUC. I then run recursive feature elimination and compute accuracy and AUC again, expecting both to improve, but after the feature elimination they dropped. Is this to be expected, or am I missing something? Thanks.
Data: https://github.com/amandawang-dev/census-training/blob/master/census-training.csv
Logistic regression: accuracy 0.8111649491571692, AUC 0.824896256487386
After recursive feature elimination: accuracy 0.8130075752405651, AUC 0.7997315631730443
import pandas as pd
import numpy as np
from sklearn import preprocessing, metrics
from sklearn.model_selection import train_test_split
train = pd.read_csv('census-training.csv')
train = train.replace('?', np.nan)

# Fill missing values with each column's mode
for column in train.columns:
    train[column].fillna(train[column].mode()[0], inplace=True)

# Binarize the target and the Gender column (these operate on the
# train DataFrame, not an undefined x)
train['Income'] = train['Income'].str.contains('>50K').astype(int)
train['Gender'] = train['Gender'].str.contains('Male').astype(int)

# Label-encode all remaining 'object' (string) columns
obj = train.select_dtypes(include=['object'])
le = preprocessing.LabelEncoder()
for col in obj.columns:
    train[col] = le.fit_transform(train[col])
train_set, test_set = train_test_split(train, test_size=0.3, random_state=42)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, classification_report, roc_curve, roc_auc_score
from sklearn.metrics import accuracy_score
log_rgr = LogisticRegression(random_state=0)
X_train = train_set.iloc[:, 0:9]
y_train = train_set.iloc[:, 9]  # 1-D Series avoids sklearn shape warnings
X_test = test_set.iloc[:, 0:9]
y_test = test_set.iloc[:, 9]
log_rgr.fit(X_train, y_train)
y_pred = log_rgr.predict(X_test)
lr_acc = accuracy_score(y_test, y_pred)
probs = log_rgr.predict_proba(X_test)
preds = probs[:,1]
print(preds)
from sklearn.preprocessing import label_binarize
y = label_binarize(y_test, classes=[0, 1]) #note to myself: class need to have only 0,1
fpr, tpr, threshold = metrics.roc_curve(y, preds)
roc_auc = roc_auc_score(y_test, preds)
print("Accuracy: {}".format(lr_acc))
print("AUC: {}".format(roc_auc))
from sklearn.feature_selection import RFE
rfe = RFE(log_rgr, n_features_to_select=5)  # positional arg is deprecated in newer sklearn
fit = rfe.fit(X_train, y_train)
X_train_new = fit.transform(X_train)
X_test_new = fit.transform(X_test)
log_rgr.fit(X_train_new, y_train)
y_pred = log_rgr.predict(X_test_new)
lr_acc = accuracy_score(y_test, y_pred)
probs = log_rgr.predict_proba(X_test_new)  # use the refit model on the reduced feature set
preds = probs[:,1]
y = label_binarize(y_test, classes=[0, 1])
fpr, tpr, threshold = metrics.roc_curve(y, preds)
roc_auc =roc_auc_score(y_test, preds)
print("Accuracy: {}".format(lr_acc))
print("AUC: {}".format(roc_auc))
Answer
There is simply no guarantee that any kind of feature selection (backward, forward, recursive, as in your case) will actually improve performance in general. None at all. These tools exist only for convenience; they may help, or they may not. The best guide, and the final judge, is always the experiment.

Apart from a few very specific cases in linear or logistic regression, most notably the Lasso (which, not coincidentally, comes from statistics), or some rather extreme situations with too many features (the curse of dimensionality), even when feature selection works (or doesn't), there is not necessarily much to explain why (or why not).
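If you do want to keep using RFE, one practical improvement is to let cross-validation choose the number of retained features instead of fixing it at 5. A minimal sketch using scikit-learn's RFECV (synthetic data stands in for the census CSV, so the selected feature count is illustrative only):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold

# Synthetic stand-in for the census data: 10 features, only some informative
X, y = make_classification(n_samples=1000, n_features=10, n_informative=4,
                           n_redundant=2, random_state=42)

# RFECV eliminates features recursively and scores each feature count
# by cross-validated AUC, keeping the best-scoring subset size
selector = RFECV(LogisticRegression(max_iter=1000), step=1,
                 cv=StratifiedKFold(5), scoring='roc_auc')
selector.fit(X, y)

print("Optimal number of features:", selector.n_features_)
print("Selected feature mask:", selector.support_)
```

If the cross-validated score peaks at the full feature set, that is itself the answer: elimination was not helping on this data.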