Get names of the most important features for Logistic Regression after transformation
Posted: 2021-12-26 16:47:04

Question: I want to get the names of the most important features for my Logistic Regression model after the transformation step. My setup:
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, Normalizer

columns_for_encoding = ['a', 'b', 'c', 'd', 'e', 'f',
                        'g', 'h', 'i', 'j', 'k', 'l', 'm',
                        'n', 'o', 'p']
columns_for_scaling = ['aa', 'bb', 'cc', 'dd', 'ee']

transformerVectoriser = ColumnTransformer(
    transformers=[('Vector Cat', OneHotEncoder(handle_unknown="ignore"), columns_for_encoding),
                  ('Normalizer', Normalizer(), columns_for_scaling)],
    remainder='passthrough')
I know I can do this:
from matplotlib import pyplot
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(features, results, test_size=0.2, random_state=42)
x_train = transformerVectoriser.fit_transform(x_train)
x_test = transformerVectoriser.transform(x_test)

clf = LogisticRegression(max_iter=5000, class_weight={1: 3.5, 0: 1})
model = clf.fit(x_train, y_train)
importance = model.coef_[0]

# summarize feature importance
for i, v in enumerate(importance):
    print('Feature: %0d, Score: %.5f' % (i, v))

# plot feature importance
pyplot.bar([x for x in range(len(importance))], importance)
pyplot.show()
But with this I only get feature1, feature2, feature3... and so on, and after the transformation I have around 45k features.
How can I get the list of the most important features, by their names before the transformation? I want to know which features matter most to the model. I have a lot of categorical features with 100+ distinct categories, so after encoding there are more features than rows in my dataset. I therefore want to find out which features I can exclude from the dataset and which ones work best for the model.
Important:
I also have other features that are used but not transformed, because I set remainder='passthrough'.
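For reference, a minimal sketch of one way to at least see real column names instead of indices, assuming a recent scikit-learn (>= 1.1, where ColumnTransformer.get_feature_names_out() is available for all the transformers used here):

# Minimal sketch, assuming scikit-learn >= 1.1: after fitting, the
# ColumnTransformer can report one name per transformed output column,
# prefixed with the transformer name (e.g. 'Vector Cat__a_some-category',
# 'remainder__untouched-col'), in the same order as model.coef_[0].
feature_names = transformerVectoriser.get_feature_names_out()
for name, coef in zip(feature_names, model.coef_[0]):
    print(f'{name}: {coef:.5f}')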
Answer 1:
As you already know, the whole idea of feature importance is somewhat tricky for the case of LogisticRegression. You can read more about it in these posts:
- How to find the importance of the features for a logistic regression model?
- Feature Importance in Logistic Regression for Machine Learning Interpretability
- How to Calculate Feature Importance With Python
Personally, I found these and other similar posts inconclusive, so I am going to avoid that part in my answer and instead address your main question of feature splitting and aggregating the importances of the split features (assuming importances are available for them), using a RandomForestClassifier. I am also assuming that a parent feature's importance is the sum of its child features' importances, as the toy sketch below illustrates.
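For instance, under that assumption (the numbers here are made up, purely for illustration):

# Toy illustration of the aggregation assumption: the importance of the
# original 'species' column is taken to be the sum of the importances of
# its one-hot-derived columns. The values are hypothetical.
child_importances = {'species_Adelie': 0.10, 'species_Gentoo': 0.05, 'species_Chinstrap': 0.02}
parent_importance = sum(child_importances.values())
print(parent_importance)  # 0.17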
Under these assumptions, we can use the code below to get the importances of the original features. I use the Palmer Archipelago (Antarctica) penguin data for illustration.
import numpy as np
import pandas as pd

df = pd.read_csv('./data/penguins_size.csv')
df = df.dropna()

# to comply with the assumption later that column names don't contain _
df.columns = [c.replace('_', '-') for c in df.columns]

X = df.iloc[:, :-1]
y = np.asarray(df.iloc[:, 6] == 'MALE').astype(int)

pd.options.display.width = 0
print(X.head())
species | island | culmen-length-mm | culmen-depth-mm | flipper-length-mm | body-mass-g
---|---|---|---|---|---
Adelie | Torgersen | 39.1 | 18.7 | 181.0 | 3750.0
Adelie | Torgersen | 39.5 | 17.4 | 186.0 | 3800.0
Adelie | Torgersen | 40.3 | 18.0 | 195.0 | 3250.0
Adelie | Torgersen | 36.7 | 19.3 | 193.0 | 3450.0
Adelie | Torgersen | 39.3 | 20.6 | 190.0 | 3650.0
from sklearn.ensemble import RandomForestClassifier

columns_for_encoding = ['species', 'island']
columns_for_scaling = ['culmen-length-mm', 'culmen-depth-mm']

transformerVectoriser = ColumnTransformer(
    transformers=[('Vector Cat', OneHotEncoder(handle_unknown="ignore"), columns_for_encoding),
                  ('Normalizer', Normalizer(), columns_for_scaling)],
    remainder='passthrough')

x_train, x_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
x_train = transformerVectoriser.fit_transform(x_train)
x_test = transformerVectoriser.transform(x_test)

clf = RandomForestClassifier(max_depth=5)
model = clf.fit(x_train, y_train)
importance = model.feature_importances_
# feature names derived from the encoded columns and their individual importances
# encoded cols
enc_col_out = transformerVectoriser.named_transformers_['Vector Cat'].get_feature_names_out()
enc_col_out_imp = importance[transformerVectoriser.output_indices_['Vector Cat']]

# normalized cols
norm_col = transformerVectoriser.named_transformers_['Normalizer'].feature_names_in_
norm_col_imp = importance[transformerVectoriser.output_indices_['Normalizer']]

# remainder cols, require a quick lookup as no transformer object exists for this case
rem_cols = []
for (tname, _, cs) in transformerVectoriser.transformers_:
    if tname == 'remainder':
        rem_cols = X.columns[cs]
        break
rem_col_imp = importance[transformerVectoriser.output_indices_['remainder']]

# storing them in a df for easy manipulation
imp_df = pd.DataFrame({'feature': (list(enc_col_out) + list(norm_col) + list(rem_cols)),
                       'importance': (list(enc_col_out_imp) + list(norm_col_imp) + list(rem_col_imp))})

# aggregating, assuming that column names don't contain _ just to keep it simple
imp_df['feature'] = imp_df['feature'].apply(lambda x: x.split('_')[0])
imp_agg = imp_df.groupby(by=['feature']).sum()
print(imp_agg)

print(f'Sum of feature importances: {imp_df["importance"].sum()}')
Output: the aggregated importance table, with one row per original feature (not reproduced here).
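Since the original question uses LogisticRegression rather than a random forest, one hedged way to plug it into the same aggregation pipeline is permutation importance, which is model-agnostic. A minimal sketch, reusing the x_train / x_test / y_train / y_test variables from the code above (permutation_importance has been in sklearn.inspection since scikit-learn 0.22):

from scipy import sparse
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

clf = LogisticRegression(max_iter=5000)
model = clf.fit(x_train, y_train)

# permutation_importance shuffles one column at a time and does not accept
# sparse input, so densify first; for a very wide one-hot matrix (like the
# 45k-feature case in the question) this can be expensive
x_test_dense = x_test.toarray() if sparse.issparse(x_test) else x_test

result = permutation_importance(model, x_test_dense, y_test, n_repeats=10, random_state=42)
importance = result.importances_mean
# 'importance' now lines up with the transformed columns and can be fed
# into the same imp_df aggregation shown above

Values near zero (occasionally slightly negative from noise) flag columns the model does not rely on.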
Comments:
OK, but what about all the other features? Your output only has island and species; what about everything else?
I had omitted them because they map one-to-one onto their derived features, but I see that including them is useful. Now we have them too.
Thanks! One more question: what if the importance score I get is below 0 (a negative value)?
The scores you got are regression coefficients; although they indicate importance, they are not importances in themselves. You could try some principled methods of computing importance (you could try link #2 I shared, though I don't recommend it). Once you have actual importance numbers, they are never negative. To try something crude, you could also ignore the sign of the coefficients and use their magnitude as importance (the larger the coefficient on either side, the more important the feature); in that case, make sure you at least account for differences in feature scale.
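A hedged sketch of that last suggestion (my own illustration, not from the original thread): standardize the transformed features so the coefficient magnitudes are comparable, then take their absolute values as a crude importance.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# StandardScaler(with_mean=False) works on sparse matrices too, while
# still putting all transformed columns on a comparable scale
scaler = StandardScaler(with_mean=False)
x_train_scaled = scaler.fit_transform(x_train)

clf = LogisticRegression(max_iter=5000)
model = clf.fit(x_train_scaled, y_train)

# crude importance: coefficient magnitude, sign ignored; these values can
# be aggregated back to the original features exactly as shown above
importance = np.abs(model.coef_[0])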