Spark MLlib Quick Guide, Models Part 05: Decision Tree (Python)
Posted by 黎明程序员
Contents
Decision tree principles
Decision tree code (Spark Python)
Decision tree principles
For details, see the blog post: http://www.cnblogs.com/itmorn/p/7918797.html
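Since the theory is only linked above, here is a minimal illustrative sketch (not part of the original post or of MLlib's API) of the Gini impurity measure that the classifier below uses as its split criterion: a node's impurity is 1 minus the sum of squared class proportions.

# Minimal sketch of Gini impurity for a set of class labels.
from collections import Counter

def gini(labels):
    """Gini impurity: 1 - sum(p_k^2) over class proportions p_k."""
    n = len(labels)
    counts = Counter(labels)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

print(gini([0, 0, 1, 1]))  # 0.5 -> maximally impure for two classes
print(gini([1, 1, 1, 1]))  # 0.0 -> pure node

A decision tree greedily chooses, at each node, the split that most reduces this impurity (weighted by the size of the child nodes).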
Decision tree code (Spark Python)
Data used in the code: https://pan.baidu.com/s/1jHWKG4I (password: acq1)
# -*- coding=utf-8 -*-
from pyspark import SparkConf, SparkContext
from pyspark.mllib.tree import DecisionTree, DecisionTreeModel
from pyspark.mllib.util import MLUtils

sc = SparkContext('local')

# Load and parse the data file into an RDD of LabeledPoint.
data = MLUtils.loadLibSVMFile(sc, 'data/mllib/sample_libsvm_data.txt')
'''
Each line represents a labeled sparse feature vector in the format:
label index1:value1 index2:value2 ...

>>> tempFile.write(b"+1 1:1.0 3:2.0 5:3.0\n-1\n-1 2:4.0 4:5.0 6:6.0")
>>> tempFile.flush()
>>> examples = MLUtils.loadLibSVMFile(sc, tempFile.name).collect()
>>> tempFile.close()
>>> examples[0]
LabeledPoint(1.0, (6,[0,2,4],[1.0,2.0,3.0]))
>>> examples[1]
LabeledPoint(-1.0, (6,[],[]))
>>> examples[2]
LabeledPoint(-1.0, (6,[1,3,5],[4.0,5.0,6.0]))
'''

# Split the data into training and test sets (30% held out for testing).
(trainingData, testData) = data.randomSplit([0.7, 0.3])

# Train a DecisionTree model.
# An empty categoricalFeaturesInfo indicates all features are continuous.
model = DecisionTree.trainClassifier(trainingData, numClasses=2, categoricalFeaturesInfo={},
                                     impurity='gini', maxDepth=5, maxBins=32)

# Evaluate the model on test instances and compute the test error.
predictions = model.predict(testData.map(lambda x: x.features))
labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)
testErr = labelsAndPredictions.filter(
    lambda lp: lp[0] != lp[1]).count() / float(testData.count())
print('Test Error = ' + str(testErr))  # Test Error = 0.0294117647059

print('Learned classification tree model:')
print(model.toDebugString())
'''
DecisionTreeModel classifier of depth 2 with 5 nodes
  If (feature 406 <= 72.0)
   If (feature 100 <= 165.0)
    Predict: 0.0
   Else (feature 100 > 165.0)
    Predict: 1.0
  Else (feature 406 > 72.0)
   Predict: 1.0
'''

# Save and load the model.
model.save(sc, "target/tmp/myDecisionTreeClassificationModel")
sameModel = DecisionTreeModel.load(sc, "target/tmp/myDecisionTreeClassificationModel")
print(sameModel.predict(data.collect()[0].features))  # 0.0
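The same API also supports regression trees. The following is a hedged sketch (not from the original post) that reuses the trainingData and testData RDDs from the snippet above and trains a regression tree with variance impurity via DecisionTree.trainRegressor, then reports the mean squared error on the test set.

# Regression counterpart: trainRegressor uses variance as the impurity measure.
from pyspark.mllib.tree import DecisionTree

regModel = DecisionTree.trainRegressor(trainingData, categoricalFeaturesInfo={},
                                       impurity='variance', maxDepth=5, maxBins=32)
regPredictions = regModel.predict(testData.map(lambda x: x.features))
labelsAndPreds = testData.map(lambda lp: lp.label).zip(regPredictions)
testMSE = labelsAndPreds.map(lambda lp: (lp[0] - lp[1]) ** 2).sum() / float(testData.count())
print('Test Mean Squared Error = ' + str(testMSE))

Note that trainRegressor takes no numClasses argument; otherwise the workflow (split, train, predict, evaluate, save/load) mirrors the classification example.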
That covers Spark MLlib Quick Guide, Models Part 05: Decision Tree (Python). If it did not answer your question, see the following posts:
Spark MLlib Quick Guide, Models Part 06: Random Forests (Python)
Spark MLlib Quick Guide, Models Part 04: Naive Bayes (Python)
Spark MLlib Quick Guide, Models Part 02: Logistic Regression (Python)
Spark MLlib Quick Guide, Basics Part 01: Setting Up a Spark Development Environment on Windows (Scala)