Selecting the subset of features that gives the best adjusted R-squared value by applying RFE


Posted: 2019-08-11 12:43:59

Question:

I have two goals. I want to:

    iterate over feature counts 1 through 10, and then compare the resulting adjusted R-squared values.

I know how to do this for just one fixed number of features, as shown in the code below. I tried looping the input to selector = RFE(regr, n_features_to_select, step=1), but I think I'm missing a key piece of the puzzle. Thanks!

from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

regr = LinearRegression()
# parameters: estimator, n_features_to_select=None, step=1

selector = RFE(regr, 5, step=1)
selector.fit(x_train, y_train)
selector.support_

def show_best_model(support_array, columns, model):
    y_pred = model.predict(X_test.iloc[:, support_array])
    r2 = r2_score(y_test, y_pred)
    n = len(y_pred)  # size of test set
    p = len(model.coef_)  # number of features
    adjusted_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    print('Adjusted R-squared: %.2f' % adjusted_r2)
    j = 0
    for i in range(len(support_array)):
        if support_array[i]:
            print(columns[i], model.coef_[j])
            j += 1


show_best_model(selector.support_, x_train.columns, selector.estimator_)

Comments:

I suggest you change one of your tags to scikit-learn, since you'll reach more people with that expertise.

Answer 1:

You can create a custom GridSearchCV that performs an exhaustive search over specified parameter values for an estimator.

You can also choose any available scoring function, such as the R2 Score in Scikit-learn. However, adjusted R2 cannot be passed to GridSearchCV directly; instead you can compute it from the R2 score with the simple formula given here, and then implement it in the custom GridSearchCV.
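For reference, the adjustment used throughout this answer relates the two scores as follows, where n is the sample size and p the number of explanatory variables:

```latex
\bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1}
```

It penalizes R2 as more features are added, which is why it can be used to compare models with different numbers of selected features.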

from collections import OrderedDict
from itertools import product
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.datasets import load_iris
from sklearn.metrics import r2_score
from sklearn.model_selection import StratifiedKFold


def customR2Score(y_true, y_pred, n, p):
    """
    Workaround for the adjusted R^2 score
    :param y_true: Ground Truth during iterations
    :param y_pred: Y predicted during iterations
    :param n: the sample size
    :param p: the total number of explanatory variables in the model
    :return: float, adjusted R^2 score
    """
    r2 = r2_score(y_true, y_pred)
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)


def CustomGridSearchCV(X, Y, param_grid, n_splits=10, n_repeats=3):
    """
    Perform GridSearchCV using adjusted R^2 as Scoring.
    Note here we are performing GridSearchCV MANUALLY because adjusted R^2
    cannot be used directly in the GridSearchCV function builtin in Scikit-learn
    :param X: array_like, shape (n_samples, n_features), Samples.
    :param Y: array_like, shape (n_samples, ), Target values.
    :param param_grid: Dictionary with parameters names (string) as keys and lists
                       of parameter settings to try as values, or a list of such
                       dictionaries, in which case the grids spanned by each dictionary
                       in the list are explored. This enables searching over any
                       sequence of parameter settings.
    :param n_splits: Number of folds. Must be at least 2. default=10
    :param n_repeats: Number of times cross-validator needs to be repeated. default=3
    :return: an Ordered Dictionary of the model object and information and best parameters
    """
    best_model = OrderedDict()
    best_model['best_params'] = {}
    best_model['best_train_AdjR2'], best_model['best_cross_AdjR2'] = 0, 0
    best_model['best_model'] = None

    allParams = OrderedDict()
    for key, value in param_grid.items():
        allParams[key] = value

    for items in product(*allParams.values()):
        params = {}
        i = 0
        for k in allParams.keys():
            params[k] = items[i]
            i += 1
        # at this point, we get different combination of parameters
        model_ = RFE(**params)
        avg_AdjR2_train = 0.
        avg_AdjR2_cross = 0.
        for rep in range(n_repeats):
            skf = StratifiedKFold(n_splits=n_splits, shuffle=True)
            AdjR2_train = 0.
            AdjR2_cross = 0.
            for train_index, cross_index in skf.split(X, Y):
                x_train, x_cross = X[train_index], X[cross_index]
                y_train, y_cross = Y[train_index], Y[cross_index]
                model_.fit(x_train, y_train)
                # find Adjusted R2 of train and cross
                y_pred_train = model_.predict(x_train)
                y_pred_cross = model_.predict(x_cross)
                AdjR2_train += customR2Score(y_train, y_pred_train, len(y_train), model_.n_features_)
                AdjR2_cross += customR2Score(y_cross, y_pred_cross, len(y_cross), model_.n_features_)
            AdjR2_train /= n_splits
            AdjR2_cross /= n_splits
            avg_AdjR2_train += AdjR2_train
            avg_AdjR2_cross += AdjR2_cross
        avg_AdjR2_train /= n_repeats
        avg_AdjR2_cross /= n_repeats
        # store the results of the first set of parameters combination
        if abs(avg_AdjR2_cross) >= abs(best_model['best_cross_AdjR2']):
            best_model['best_params'] = params
            best_model['best_train_AdjR2'] = avg_AdjR2_train
            best_model['best_cross_AdjR2'] = avg_AdjR2_cross
            best_model['best_model'] = model_

    return best_model



# Dataset for testing
iris = load_iris()
X = iris.data
Y = iris.target


regr = LinearRegression()

param_grid = {'estimator': [regr],  # you can try a different estimator
              'n_features_to_select': range(1, X.shape[1] + 1)}
best_model = CustomGridSearchCV(X, Y, param_grid, n_splits=5, n_repeats=2)

print(best_model)
print(best_model['best_model'].ranking_)
print(best_model['best_model'].support_)

Test results:

OrderedDict([
('best_params', {'n_features_to_select': 3, 'estimator':
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)}),
('best_train_AdjR2', 0.9286382985850505), ('best_cross_AdjR2', 0.9188172567358479),
('best_model', RFE(estimator=LinearRegression(copy_X=True, fit_intercept=True,
 n_jobs=1, normalize=False), n_features_to_select=3, step=1, verbose=0))])

[1 2 1 1]

[ True False  True  True]

Comments:

@ron If this answer helped you, please accept it :)

Answer 2:

Thanks for the reply, Yahya. I haven't had a chance to test it yet. I'm still fairly new to Python, so I'll try to learn from your response.

That said, I found a solution to my problem. Here it is for future learners:

def show_best_model(support_array, columns, model):
    y_pred = model.predict(X_test.iloc[:, support_array])
    r2 = r2_score(y_test, y_pred)
    n = len(y_pred) #size of test set
    p = len(model.coef_) #number of features
    adjusted_r2 = 1-(1-r2)*(n-1)/(n-p-1)
    print('Adjusted R-squared: %.2f' % adjusted_r2)
    j = 0
    for i in range(len(support_array)):
        if support_array[i] == True:
            print(columns[i], model.coef_[j])
            j +=1

from sklearn.feature_selection import RFE
regr = LinearRegression()

for m in range(1, 11):
    selector = RFE(regr, m, step=1)
    selector.fit(x_train, y_train)
    show_best_model(selector.support_, x_train.columns, selector.estimator_)

import math
from numpy import mean
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

X = df.loc[:, ['Age_08_04', 'KM', 'HP', 'Weight', 'Automatic_airco']]
x_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=.4,
                                                    random_state=20)
regr = LinearRegression()
regr.fit(x_train, y_train)
y_pred = regr.predict(X_test)
print('Average error: %.2f' % mean(y_test - y_pred))
print('Mean absolute error: %.2f' % mean_absolute_error(y_test, y_pred))
print('Mean absolute error: %.2f' % (mean(abs(y_test - y_pred))))
print("Root mean squared error: %.2f"
      % math.sqrt(mean_squared_error(y_test, y_pred)))
print('Percentage absolute error: %.2f' % mean(abs((y_test - y_pred) / y_test)))
print('Percentage absolute error: %.2f' % (mean(abs(y_test - y_pred)) / mean(y_test)))
print('R-squared: %.2f' % r2_score(y_test, y_pred))

x_train = x_train.loc[:,
                      ['Age_08_04', 'KM' , 'HP',
                       'Weight', 'Automatic_airco']]
X_test = X_test.loc[:,
                    ['Age_08_04', 'KM' , 'HP',
                     'Weight', 'Automatic_airco']]
selector = RFE(regr, 5, step=1)
selector.fit(x_train, y_train)
show_best_model(selector.support_, x_train.columns, selector.estimator_)

Comments:

The only thing missing is that it doesn't compare the values for you. You have to compare the r-squared values yourself and then use that number of features. Dear @ron, my approach is a standard one, and it does compare the values and select the best, as you said; once you're comfortable with Python you'll see that my solution is straightforward :) By the way, if by comparing you mean inspecting the metadata, print() is your friend :)
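As the comment points out, the loop in Answer 2 prints each model's adjusted R-squared but leaves the final comparison to the reader. A minimal sketch of automating that last step, tracking the best adjusted R-squared across feature counts 1-10 (shown here on a synthetic dataset via make_regression, since the original df is not available):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the original data frame: 10 candidate features,
# of which only 5 are informative.
X, y = make_regression(n_samples=200, n_features=10, n_informative=5,
                       noise=10.0, random_state=0)
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.4,
                                                    random_state=20)

best_m, best_adj_r2, best_support = None, -np.inf, None
for m in range(1, 11):
    selector = RFE(LinearRegression(), n_features_to_select=m, step=1)
    selector.fit(x_train, y_train)
    # selector.predict reduces x_test to the selected columns internally
    r2 = r2_score(y_test, selector.predict(x_test))
    n, p = len(y_test), m
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - p - 1)
    if adj_r2 > best_adj_r2:
        best_m, best_adj_r2, best_support = m, adj_r2, selector.support_

print('Best number of features:', best_m)
print('Best adjusted R-squared: %.3f' % best_adj_r2)
print('Selected feature mask:', best_support)
```

This keeps the per-model evaluation from the answer but replaces the manual eyeballing with a running maximum over adjusted R-squared.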
