Using Scikit-learn to determine feature importances per class in a RF model

Posted: 2018-10-16 12:49:24

【Question】:

I have a dataset that follows a one-hot encoding pattern, and my dependent variable is also binary. The first part of my code lists the important variables for the dataset as a whole. I used the method described in the *** post "Using scikit to determine contributions of each feature to a specific class prediction". I am not sure what to make of the output I get: in my case the overall-model feature importances rank the variable '延迟相关 DMS 与建议' at the top. I read this as saying the variable should be important in either class 0 or class 1, yet in the output I get it is not important in either class. The code in the *** post shared above also shows that when the DV is binary, the output for class 0 is the exact mirror of class 1 (opposite +/- signs), whereas in my case the values differ between the two classes.
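Since RCM_Binary.csv is not attached, here is a minimal, entirely hypothetical stand-in with the same shape (a few one-hot columns plus a binary 'Diverted' target in the last column) so the code below can be run end to end:

import numpy as np
import pandas as pd

# Hypothetical stand-in for RCM_Binary.csv: one-hot encoded features
# plus a binary target in the last column, as described above.
rng = np.random.RandomState(0)
n = 200
data = pd.DataFrame({
    'DMS_Related': rng.randint(0, 2, n),
    'Advisory': rng.randint(0, 2, n),
    'Weather_Event': rng.randint(0, 2, n),
    'Incident': rng.randint(0, 2, n),
})
# Tie the target loosely to two features so importances are non-trivial
data['Diverted'] = (data['DMS_Related'] + data['Advisory']
                    + rng.rand(n) > 1.5).astype(int)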

Here is what the plots look like:

[Plot: Feature importances - overall model]

[Plot: Feature importances - class 0]

[Plot: Feature importances - class 1]

The second part of my code shows the cumulative feature importances, but the [plot] suggests that none of the variables is important. Is my formula wrong, is my interpretation wrong, or both?

[Plot: cumulative importances]

Here is my code:

import pandas as pd
import numpy as np
import json
import matplotlib.pyplot as plt
%matplotlib inline

from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import scale
from sklearn.ensemble import ExtraTreesClassifier


##get_ipython().run_line_magic('matplotlib', 'inline')

file = r'RCM_Binary.csv'
data = pd.read_csv(file)
print("data loaded successfully ...")

# Define features and target
X = data.iloc[:,:-1]
y = data.iloc[:,-1]

#split to training and testing
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=41)

# define classifier and fitting data
forest = ExtraTreesClassifier(random_state=1)
forest.fit(X_train, y_train)

# predict and get confusion matrix
y_pred = forest.predict(X_test)
cm = confusion_matrix(y_test, y_pred)
print(cm)

#Applying 10-fold cross validation
accuracies = cross_val_score(estimator=forest, X=X_train, y=y_train, cv=10)
print("accuracy (10-fold): ", np.mean(accuracies))

# Feature importances
importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_],
             axis=0)
indices = np.argsort(importances)[::-1]
feature_list = list(X.columns)  # feature names in column order, so ff[indices[f]] below is the f-th ranked feature
ff = np.array(feature_list)

# Print the feature ranking
print("Feature ranking:")

for f in range(X.shape[1]):
    print("%d. feature %d (%f) name: %s" % (f + 1, indices[f], importances[indices[f]], ff[indices[f]]))


# Plot the feature importances of the forest
plt.figure()
plt.rcParams['figure.figsize'] = [16, 6]
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices],
       color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), ff[indices], rotation=90)
plt.xlim([-1, X.shape[1]])
plt.show()


## The new additions to get feature importances per class:

# To get the importance according to each class:
def class_feature_importance(X, Y, feature_importances):
    N, M = X.shape
    X = scale(X)

    out = {}
    for c in set(Y):
        out[c] = dict(
            zip(range(N), np.mean(X[Y==c, :], axis=0)*feature_importances)
        )

    return out
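
# Note: the function above mirrors the formula from the *** post linked in
# the question: for each class c it weights every feature's global importance
# by that feature's mean scaled value within class c, so a positive entry
# means samples of class c sit above that feature's overall average.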

result = class_feature_importance(X, y, importances)
print (json.dumps(result,indent=4))

# Plot the feature importances of the forest

titles = ["Did not Divert", "Diverted"]
for t, i in zip(titles, range(len(result))):
    plt.figure()
    plt.rcParams['figure.figsize'] = [16, 6]
    plt.title(t)
    plt.bar(range(len(result[i])), list(result[i].values()),
           color="r", align="center")
    plt.xticks(range(len(result[i])), ff[list(result[i].keys())], rotation=90)
    plt.xlim([-1, len(result[i])])
    plt.show()

Second part of the code:

# List of tuples with variable and importance
feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(feature_list, importances)]
# Sort the feature importances by most important first
feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True)
# Print out the feature and importances 
[print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances]

# list of x locations for plotting
x_values = list(range(len(importances)))
# Make a bar chart
plt.bar(x_values, importances, orientation = 'vertical', color = 'r', edgecolor = 'k', linewidth = 1.2)
# Tick labels for x axis
plt.xticks(x_values, feature_list, rotation='vertical')
# Axis labels and title
plt.ylabel('Importance'); plt.xlabel('Variable'); plt.title('Variable Importances');


# List of features sorted from most to least important
sorted_importances = [importance[1] for importance in feature_importances]
sorted_features = [importance[0] for importance in feature_importances]
# Cumulative importances
cumulative_importances = np.cumsum(sorted_importances)
# Make a line graph
plt.plot(x_values, cumulative_importances, 'g-')
# Draw line at 95% of importance retained
plt.hlines(y = 0.95, xmin=0, xmax=len(sorted_importances), color = 'r', linestyles = 'dashed')
# Format x ticks and labels
plt.xticks(x_values, sorted_features, rotation = 'vertical')
# Axis labels and title
plt.xlabel('Variable'); plt.ylabel('Cumulative Importance'); plt.title('Cumulative Importances');
plt.show()
# Find number of features for cumulative importance of 95%
# Add 1 because Python is zero-indexed
print('Number of features for 95% importance:', np.where(cumulative_importances > 0.95)[0][0] + 1)

【Comments】:

Welcome to ***. To get good answers quickly, please add a Minimal, Complete, and Verifiable Example to your post. In your case there is no starting data to work with, so others can only inspect your code, and all the plotting parts of the code are unnecessary and just make it hard to see what is going on. If the plots are needed to illustrate your point, then you should provide some sample data, loadable by your code, that is representative of the actual data you are working with.

@andrew_reece I edited my post to reflect your points. As a new user I can only add links, not images. I also broke the code up to make it easier to review.

【Answer 1】:

This question may be old, but in case anyone is still interested:

The class_feature_importance function you copied from its source uses rows as features and columns as samples, while you, like most people, use the opposite layout. As a result, the per-class feature importance calculation goes wrong. Changing the code to

zip(range(M))

should fix it.
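
For concreteness, here is a sketch of the full function with that fix applied (the keys become feature indices 0..M-1 instead of sample indices; the names follow the question's code):

from sklearn.preprocessing import scale
import numpy as np

def class_feature_importance(X, Y, feature_importances):
    # X: samples in rows, features in columns (the layout used in the question)
    N, M = X.shape
    X = scale(X)

    out = {}
    for c in set(Y):
        # zip over range(M) so keys are feature indices, not sample indices
        out[c] = dict(
            zip(range(M), np.mean(X[Y == c, :], axis=0) * feature_importances)
        )
    return out

The keys of each inner dict then line up with the columns of X, which is what the question's plotting code indexes ff with.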

【Discussion】:

【Answer 2】:

Also make sure your y variable is not an array. If it is an array, you can use

np.mean(X[Y==c])
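
A minimal sketch of what this guards against, assuming (hypothetically) that Y arrived as an (n, 1) column vector rather than a flat Series:

import numpy as np

Y = np.array([[0], [1], [1], [0]])   # 2-D column vector
# A (4, 1) boolean mask such as (Y == 1) cannot select rows of a 2-D X;
# flattening the target first restores the usual row-mask behaviour.
Y = np.ravel(Y)
print(Y == 1)   # [False  True  True False], a 1-D mask usable as X[Y == 1, :]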

【Discussion】:
