A Scala Example of Spark's MultilayerPerceptronClassifier and Notes on Its Optimization Algorithms
Posted by Maggie张张
Below is a small demo that calls Spark's MLPC from Scala. Note that MLPC handles both binary and multiclass classification; the number of neurons in the first layer of the layers parameter must equal the number of features in the dataset, and the number of neurons in the last layer must equal the number of classes.
package com.spark

import org.apache.spark.ml.classification.MultilayerPerceptronClassifier
import org.apache.spark.ml.evaluation.MulticlassClassificationEvaluator
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object MultilayerPerceptron {

  val spark: SparkSession = SparkSession
    .builder()
    .master("local")
    .appName("MultiLayer Perceptron Classifier")
    .config("spark.some.config.option", "some-value")
    .getOrCreate()

  import spark.implicits._

  def main(args: Array[String]): Unit = {
    // A small sample of the Iris dataset
    val dataList: List[(Double, Double, Double, Double, String)] = List(
      (5.1, 3.5, 1.4, 0.2, "Iris-setosa"),
      (4.9, 3.0, 1.4, 0.2, "Iris-setosa"),
      (4.7, 3.2, 1.3, 0.2, "Iris-setosa"),
      (4.6, 3.1, 1.5, 0.2, "Iris-setosa"),
      (5.0, 3.6, 1.4, 0.2, "Iris-setosa"),
      (7.0, 3.2, 4.7, 1.4, "Iris-versicolor"),
      (6.4, 3.2, 4.5, 1.5, "Iris-versicolor"),
      (6.9, 3.1, 4.9, 1.5, "Iris-versicolor"),
      (6.1, 2.8, 4.0, 1.3, "Iris-versicolor"),
      (6.3, 2.5, 4.9, 1.5, "Iris-versicolor"),
      (6.3, 3.3, 6.0, 2.5, "Iris-virginica"),
      (6.3, 2.9, 5.6, 1.8, "Iris-virginica"),
      (6.5, 3.0, 5.8, 2.2, "Iris-virginica"),
      (7.6, 3.0, 6.6, 2.1, "Iris-virginica"),
      (4.9, 2.5, 4.5, 1.7, "Iris-virginica"),
      (7.3, 2.9, 6.3, 1.8, "Iris-virginica"),
      (6.7, 2.5, 5.8, 1.8, "Iris-virginica")
    )

    // Convert to a DataFrame and map the string label to a numeric class index
    val colArray: Array[String] = Array("sepal_length", "sepal_width", "petal_length", "petal_width", "iris")
    val data = dataList.toDF(colArray: _*)
    data.createOrReplaceTempView("data")
    val label = "case iris when 'Iris-setosa' then 0 when 'Iris-versicolor' then 1 else 2 end as label"
    val dataDF = spark.sql(s"select sepal_length, sepal_width, petal_length, petal_width, $label from data")

    // Assemble the four feature columns into a single vector column
    val features = colArray.slice(0, 4)
    val assembler = new VectorAssembler().setInputCols(features).setOutputCol("features")
    val vecDF = assembler.transform(dataDF)
    dataDF.show(10)
    vecDF.show(10)

    // Split into training and test sets
    val splits = vecDF.randomSplit(Array(0.6, 0.4), seed = 1234L)
    val trainDF = splits(0)
    val testDF = splits(1)

    // MLPC defaults: solver = l-bfgs, maxIter = 100, stepSize = 0.03 (used only by minibatch GD), tol = 1e-6
    // First layer size = number of features (4), last layer size = number of classes (3)
    val layers = Array[Int](4, 5, 3, 3)
    val trainer = new MultilayerPerceptronClassifier()
      .setFeaturesCol("features")
      .setLabelCol("label")
      .setLayers(layers)
      .setSolver("gd")
      .setStepSize(0.3)
      .setMaxIter(1000)
    val model = trainer.fit(trainDF)

    // Evaluate accuracy on the test set
    val result = model.transform(testDF)
    result.show(6)
    val predictionAndLabels = result.select("prediction", "label")
    val evaluator = new MulticlassClassificationEvaluator()
      .setPredictionCol("prediction")
      .setLabelCol("label")
      .setMetricName("accuracy")
    println(evaluator.evaluate(predictionAndLabels))
  }
}
MLPC offers two optimizers, minibatch gradient descent ("gd") and L-BFGS ("l-bfgs"). The default is L-BFGS because, compared with gradient descent, it typically converges quickly, in fewer iterations.
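For reference, here is a minimal sketch of selecting each solver on the estimator, assuming the same layers as the demo above; the stepSize, blockSize, and maxIter values below are only illustrative, not tuned.

import org.apache.spark.ml.classification.MultilayerPerceptronClassifier

val layers = Array[Int](4, 5, 3, 3)

// Default solver: L-BFGS (a quasi-Newton method), usually needs fewer iterations
val lbfgsTrainer = new MultilayerPerceptronClassifier()
  .setLayers(layers)
  .setSolver("l-bfgs")   // same as leaving the solver unset
  .setMaxIter(100)
  .setTol(1e-6)

// Alternative solver: minibatch gradient descent; stepSize matters here
val gdTrainer = new MultilayerPerceptronClassifier()
  .setLayers(layers)
  .setSolver("gd")
  .setStepSize(0.03)     // learning rate, used only by the gd solver
  .setBlockSize(128)     // block size for stacking input rows into matrices (default 128)
  .setMaxIter(1000)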
Some concepts worth noting here:
Derivative: for a function of one variable y = f(x), the rate at which y changes as x moves in the positive direction.
Partial derivative: for a function of several variables y = f(x1, x2, …), the rate at which y changes as one variable moves in its positive direction while the others are held fixed.
Directional derivative: derivatives and partial derivatives both describe how the function value changes as the input moves along a coordinate axis, but we often also want the rate of change along an arbitrary direction that need not be a coordinate axis; that rate is the directional derivative.
Gradient: a vector whose direction is that of the largest directional derivative and whose magnitude is that maximum value; in other words, the gradient points in the direction in which the function increases fastest (a small numerical sketch follows this list).
TODO: L-BFGS
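To make the definitions above concrete, here is a small self-contained Scala sketch (an illustration added for this note, not part of the demo) that approximates the gradient of f(x, y) = x^2 + 3y^2 with central differences and checks that the directional derivative is largest along the gradient direction; the point (1.0, 2.0) and the function are arbitrary illustrative choices.

object GradientDemo {
  // f(x, y) = x^2 + 3y^2; its true gradient is (2x, 6y)
  def f(x: Double, y: Double): Double = x * x + 3 * y * y

  def main(args: Array[String]): Unit = {
    val (x, y) = (1.0, 2.0)
    val h = 1e-6

    // Partial derivatives via central differences
    val dfdx = (f(x + h, y) - f(x - h, y)) / (2 * h)   // ~ 2.0
    val dfdy = (f(x, y + h) - f(x, y - h)) / (2 * h)   // ~ 12.0

    // Directional derivative along a unit vector u = (ux, uy): D_u f = grad(f) . u
    def directional(ux: Double, uy: Double): Double = dfdx * ux + dfdy * uy

    // The gradient direction gives the largest directional derivative,
    // and that maximum equals the gradient's norm
    val norm = math.sqrt(dfdx * dfdx + dfdy * dfdy)
    println(s"gradient = ($dfdx, $dfdy), norm = $norm")
    println(s"D_u f along gradient direction = ${directional(dfdx / norm, dfdy / norm)}") // ~ norm
    println(s"D_u f along x axis             = ${directional(1.0, 0.0)}")                // ~ dfdx
  }
}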
References:
- [Machine Learning] Key ML concepts: Gradient and Gradient Descent, https://blog.csdn.net/walilk/article/details/50978864
- Introduction to Mini-Batch Gradient Descent and how to choose the batch size, https://blog.csdn.net/xiang_freedom/article/details/78395145
- Numerical optimization: understanding the L-BFGS algorithm, http://www.hankcs.com/ml/l-bfgs.html