How to print the decision path of a random forest with feature names in pyspark?
Posted: 2018-08-01 13:45:57

Question: How can the code below be modified so that the decision path is printed with feature names instead of feature indices?
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
data = pd.DataFrame({
    'ball': [0, 1, 2, 3],
    'keep': [4, 5, 6, 7],
    'hall': [8, 9, 10, 11],
    'fall': [12, 13, 14, 15],
    'mall': [16, 17, 18, 10],
    'label': [21, 31, 41, 51]
})
df = spark.createDataFrame(data)
assembler = VectorAssembler(
    inputCols=['ball', 'keep', 'hall', 'fall'], outputCol='features')
dtc = DecisionTreeClassifier(featuresCol='features', labelCol='label')
pipeline = Pipeline(stages=[assembler, dtc]).fit(df)
transformed_pipeline = pipeline.transform(df)
ml_pipeline = pipeline.stages[1]
print(ml_pipeline.toDebugString)
Output:
DecisionTreeClassificationModel (uid=DecisionTreeClassifier_48b3a34f6fb1f1338624) of depth 3 with 7 nodes
  If (feature 0 <= 0.5)
   Predict: 21.0
  Else (feature 0 > 0.5)
   If (feature 0 <= 1.5)
    Predict: 31.0
   Else (feature 0 > 1.5)
    If (feature 0 <= 2.5)
     Predict: 41.0
    Else (feature 0 > 2.5)
     Predict: 51.0
Answer 1: One option is to replace the text in the string manually. To do that, store the values passed as inputCols in a list input_cols, then replace each occurrence of the pattern feature i with the i-th element of input_cols.
import pandas as pd
from pyspark.ml import Pipeline
from pyspark.ml.classification import DecisionTreeClassifier
from pyspark.ml.feature import VectorAssembler
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
data = pd.DataFrame({
    'ball': [0, 1, 2, 3],
    'keep': [4, 5, 6, 7],
    'hall': [8, 9, 10, 11],
    'fall': [12, 13, 14, 15],
    'mall': [16, 17, 18, 10],
    'label': [21, 31, 41, 51]
})
df = spark.createDataFrame(data)
input_cols = ['ball', 'keep', 'hall', 'fall']
assembler = VectorAssembler(
    inputCols=input_cols, outputCol='features')
dtc = DecisionTreeClassifier(featuresCol='features', labelCol='label')
pipeline = Pipeline(stages=[assembler, dtc]).fit(df)
transformed_pipeline = pipeline.transform(df)
ml_pipeline = pipeline.stages[1]
string = ml_pipeline.toDebugString
for i, feat in enumerate(input_cols):
    string = string.replace('feature ' + str(i), feat)
print(string)
Output:
DecisionTreeClassificationModel (uid=DecisionTreeClassifier_4eb084167f2ed4b671e8) of depth 3 with 7 nodes
  If (ball <= 0.0)
   Predict: 21.0
  Else (ball > 0.0)
   If (ball <= 1.0)
    Predict: 31.0
   Else (ball > 1.0)
    If (ball <= 2.0)
     Predict: 41.0
    Else (ball > 2.0)
     Predict: 51.0
Hope this helps!
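A note on the plain string.replace loop above: with ten or more features, replacing 'feature 1' would also corrupt 'feature 10', 'feature 11', and so on. A single regex substitution with a word boundary sidesteps this. The sketch below uses the column names from the question and a hypothetical rule string standing in for the real toDebugString output:

```python
import re

# Hypothetical inputs for illustration: the assembler's column list
# and a fragment of a toDebugString-style rule string.
input_cols = ['ball', 'keep', 'hall', 'fall']
rules = 'If (feature 0 <= 0.5) Else (feature 3 > 2.5)'

# \b ensures 'feature 1' does not also match inside 'feature 12'
# when a model has ten or more features.
named = re.sub(
    r'feature (\d+)\b',
    lambda m: input_cols[int(m.group(1))],
    rules,
)
print(named)  # If (ball <= 0.5) Else (fall > 2.5)
```

Because the index is captured and looked up, one pass handles every feature regardless of how many digits its index has.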
Comments:

You're a star! Thank you. I posted a similar question here, if you'd like to take a look: ***.com/questions/51614077/…

@Matthew Glad I could help. Unfortunately, I don't know an easy way to solve the problem described there. The only thing I can think of is writing some regex to extract the rules and then evaluating the conditions, but that sounds complicated.

Answer 2: @Florian: the code above will not work when the number of features is large (more than 9), because 'feature 1' also matches the start of 'feature 10', 'feature 11', and so on. Use the following regex-based replacement instead.
import re

tree_to_json = mod.stages[-1].toDebugString
for index, feat in index_feature_name_tuple:
    pattern = r'\(feature ' + str(index) + r' (?P<rest>.*)\)'
    tree_to_json = re.sub(pattern, rf'({feat} \g<rest>)', tree_to_json)
print(tree_to_json)
Here, tree_to_json holds the original rules and is rewritten in place into rules with feature names. index_feature_name_tuple is a list of tuples, where the first element of each tuple is a feature's index and the second is that feature's name. You can obtain it from the metadata with the following:
df_fitted.schema['features'].metadata["ml_attr"]["attrs"]
where df_fitted is the DataFrame obtained by transforming the data after fitting the pipeline.