How to use feature importance using random forest with pandas?
Posted: 2020-04-02 08:40:05
【Question Description】: I'm new here, and I'd be glad to hear your suggestions on my question. I need to know which features of my dataset are the most important, so I used SelectFromModel(RandomForestClassifier(n_estimators = 100)), but the problem is that I cannot pick out the most important columns of my database.
I was supposed to use selected_feat= X_train.columns[(sel.get_support())], but the problem is numpy: numpy does not let me use X_train.columns[]. I tried selected_feat= pd.DataFrame(columns=[(sel.get_support())]), but it does not work well.
Can anyone solve this? The dataset has 84 columns, all of them numeric. part of my data-set
# -*- coding: utf-8 -*-
"""
Created on Sat Nov 23 11:42:37 2019
@author: Jacke
"""
import pandas as pd
from pandas import DataFrame
import numpy as np
from numpy import zeros
from matplotlib import pyplot as plt
from sklearn.model_selection import GridSearchCV,train_test_split
from sklearn.metrics import confusion_matrix,accuracy_score,roc_curve,auc
from sklearn.neural_network import MLPClassifier
from sklearn import preprocessing
from sklearn.preprocessing import LabelEncoder
from keras.models import Sequential
from keras.layers import Dense
from keras.utils import to_categorical
from sklearn.feature_selection import SelectFromModel
from sklearn.ensemble import RandomForestClassifier
##########################################################################################
db = pd.read_csv(r"C:\Users\Jacke\Desktop\proposal\code\***\Test_F_Importance.csv")
X = db.iloc[:, 0:83]
y = db.iloc[:, 83]
m, n = X.shape
X = preprocessing.scale(X)
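# NOTE: preprocessing.scale returns a plain numpy array, so X (and later X_train)
# no longer has a .columns attribute -- this is the root of the AttributeError discussed below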
encoder = LabelEncoder()
encoder.fit(y)
encoded_y = encoder.transform(y)
y = to_categorical(encoded_y)
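# y is now a one-hot (n_samples, 2) array; np.argmax is used further down to decode predictions back to class labels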
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)
##########################################################################################
mlp = MLPClassifier()
parameter_space = {'hidden_layer_sizes': [(83,83,10), (20,40,20), (15,15,15)],
                   'activation': ['tanh', 'relu'],
                   'solver': ['sgd', 'adam'],
                   'alpha': [0.001, 0.01, 0.05, 0.1],
                   'learning_rate': ['constant', 'adaptive'],
                   'max_iter': [20, 50, 100]}
clf = GridSearchCV(mlp, parameter_space, n_jobs=-1, cv=3, return_train_score=True)
clf.fit(X_train, y_train)
print('Best parameters found:\n', clf.best_params_,clf.best_score_)
#########################################################################################
cvr = clf.cv_results_
df = DataFrame(cvr)
scores = df['mean_test_score']
h = df['param_hidden_layer_sizes']
alpha = df['param_alpha']
optim = df['param_solver']
l_rate = df['param_learning_rate']
activ = df['param_activation']
itr = df['param_max_iter']
dh = DataFrame({'Scores': scores, 'Iteration': itr, 'Hidden_Layers': h, 'alpha': alpha,
                'Solver': optim, 'Learning_Rate': l_rate, 'Activation': activ})
##########################################################################################
model = Sequential()
model.add(Dense(83, input_dim=n, kernel_initializer='uniform', activation='tanh'))
model.add(Dense(83, activation='tanh'))
model.add(Dense(10, activation='tanh'))
model.add(Dense(2, activation='sigmoid'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
##########################################################################################
m = model.fit(X_train, y_train, batch_size = 10, epochs = 100, validation_split=0.5)
scoress = model.evaluate(X, y, verbose=0)
print("%s: %.2f%%" % (model.metrics_names[1], scoress[1]*100))
# save model and architecture to single file
model.save("model.h5")
model.save_weights("model.h5")
print("Saved model to disk")
########################Feature_Importance################################################
sel = SelectFromModel(RandomForestClassifier(n_estimators = 100))
sel.fit(X_train, y_train)
# selected_feat= X_train.columns[(sel.get_support())]
selected_feat= pd.DataFrame(columns=[(sel.get_support())])
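# BUG (the subject of this question): this creates an empty DataFrame whose column
# labels are the boolean mask -- see the answers below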
len(selected_feat)
print(selected_feat)
########################################################################################
# Plot training & validation accuracy values
plt.plot(m.history['acc'])
plt.plot(m.history['val_acc'])
plt.title('Training vs Test accuracy , DA')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Training acc', 'Validation acc'], loc='best')
#plt.show()
#plt.figure()
plt.savefig('Accuracy.png', dpi=300, bbox_inches='tight')
plt.close()  # plt.savefig returns None, so there is no figure handle to pass to close()
# Plot training & validation loss values
plt.plot(m.history['loss'])
plt.plot(m.history['val_loss'])
plt.title('Training vs Test Loss , DA')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Training loss', 'Validation loss'], loc='best')
#plt.show()
#plt.figure()
plt.savefig('Loss.png', dpi=300, bbox_inches='tight')
plt.close()
##########################################################################################
y_score = model.predict(X_test)
org = zeros((y_test.shape[0]))
prd = zeros((y_score.shape[0]))
def decode(datum):
    return np.argmax(datum)

for i in range(y_score.shape[0]):
    prd[i] = decode(y_score[i])
for j in range(y_test.shape[0]):
    org[j] = decode(y_test[j])
cm = confusion_matrix(org, prd)
print("Confusion matrix of MLP: ", "\n", cm)
f = open("output.txt", "a")
print('Accuracy Score : ' + str(accuracy_score(org,prd)), file=f)
f.close()
##########################################################################################
#model = ExtraTreesClassifier()
#model.fit(X,y)
#print(model.feature_importances_) #use inbuilt class feature_importances of tree based classifiers
##plot graph of feature importances for better visualization
#feat_importances = pd.Series(model.feature_importances_, index=X.columns)
#feat_importances.nlargest(n).plot(kind='barh')
#plt.show()
##########################################################################################
def generate_results(y_test, y_score):
    fpr, tpr, _ = roc_curve(y_test, y_score)
    roc_auc = auc(fpr, tpr)
    plt.figure()
    plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
    plt.plot([0, 1], [0, 1], 'k--')
    plt.xlim([0.0, 1.05])
    plt.ylim([0.0, 1.05])
    plt.xlabel('False Positive Rate')
    plt.ylabel('True Positive Rate')
    plt.title('Receiver operating characteristic curve')
    #plt.show()
    plt.savefig('False and True comparison.png', dpi=300, bbox_inches='tight')
    print('AUC: %f' % roc_auc)
#model = load_model('model.h5')
#model.summary()
#print("Accuracy of MLP: ", "\n", confusion_matrix(y_score,y_test))
print('Generating results')
generate_results(y_test[:, 0], y_score[:, 0])
This is my Python code. here is my results
【Answer 1】: The get_support method returns a mask, i.e. an array of TRUE and FALSE values that applies to your columns:
So you are creating an empty dataframe whose column labels are those True/False values. If you want a list with the names of the important columns, run:
selected_feat = [elem for elem in X_train.columns[sel.get_support()]]
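A minimal, self-contained sketch of the idea (the toy data and feature names here are illustrative, not from the original post):
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Toy frame: 4 numeric features, binary target (illustrative only)
rng = np.random.default_rng(0)
X_train = pd.DataFrame(rng.normal(size=(100, 4)), columns=['f1', 'f2', 'f3', 'f4'])
y_train = (X_train['f1'] + X_train['f3'] > 0).astype(int)

sel = SelectFromModel(RandomForestClassifier(n_estimators=100, random_state=0))
sel.fit(X_train, y_train)

mask = sel.get_support()           # boolean array, e.g. [ True False  True False]
selected_feat = X_train.columns[mask]
print(list(selected_feat))         # names of the columns that passed the importance threshold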
Hope this helps!
【Comments】:
Thanks Davide, but I run into a problem if I use X_train.columns[]. This is the error: AttributeError: 'numpy.ndarray' object has no attribute 'columns'
Make them DataFrames. After using train_test_split, run X_train = pd.DataFrame(X_train, columns = X.columns) and so on. If any later call on X_train (/y/test) expects an array, just call .values to access the underlying ndarray.
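One detail the comment above glosses over: in the original script X itself is overwritten by preprocessing.scale, so the column names have to be captured from the DataFrame before scaling. A hedged sketch of the whole pattern, written as a drop-in for the corresponding lines of the question's script (it assumes db, y, and the imports from the question; the 83-column slice is taken from there):
feature_names = db.columns[0:83]            # capture names BEFORE X becomes a numpy array
X = preprocessing.scale(db.iloc[:, 0:83])   # plain ndarray from here on
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

# Wrap the arrays back into DataFrames so .columns works again
X_train = pd.DataFrame(X_train, columns=feature_names)
X_test = pd.DataFrame(X_test, columns=feature_names)

sel = SelectFromModel(RandomForestClassifier(n_estimators=100))
sel.fit(X_train, y_train)   # (in the question y is one-hot; fitting the forest on the 1-D encoded_y is cleaner)
selected_feat = X_train.columns[sel.get_support()]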
【Answer 2】: X_train.columns[sel.get_support()] — this is how it picks only the True columns out of all the features: the get_support method returns a mask, i.e. an array of TRUE and FALSE values that applies to your columns.
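If the actual importance scores are wanted rather than just the selected names, the forest fitted inside SelectFromModel is exposed as its estimator_ attribute. A minimal sketch building on the question's variables (it assumes X_train has been wrapped back into a DataFrame as shown above):
importances = pd.Series(sel.estimator_.feature_importances_, index=X_train.columns)
print(importances.sort_values(ascending=False).head(10))  # ten most important features
importances.nlargest(10).plot(kind='barh')                # same idea as the commented-out ExtraTrees block in the question
plt.show()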
【Comments】:
Please try to explain yourself better. And use "i.e." less. @OphirCarmi: I agree that Najem should format his answer. In any case, this is not a spelling bee, and reviewing a first post is not meant for lecturing about writing style like "i-e". Sometimes I feel that people reviewing forget that the point is to encourage better answers, not to point out a writing style they dislike.