Spark Machine Learning: Logistic Regression

Posted by soyosuyang


package Spark_MLlib

import org.apache.spark.sql.SparkSession
import org.apache.spark.ml.{Pipeline, PipelineModel}
import org.apache.spark.ml.classification.LogisticRegression
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.Row

/**
  * Spark's logistic regression API:
  * http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.classification.LogisticRegression
  */
object 逻辑回归 {
      val spark=SparkSession.builder().master("local[2]").appName("逻辑回归").getOrCreate()
      import spark.implicits._
  def main(args: Array[String]): Unit = {
      val training = spark.createDataFrame(Seq((0,"soyo spark soyo1",1.0),(1,"hadoop spark",1.0),(2,"zhouhang xiaohai",0.0),(3,"hbase spark hive soyo",1.0))).
        toDF("id","text","label")

       // Transformers: split the text into words, then hash the words into a 1000-dimensional feature vector
       val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
       val hashingTF = new HashingTF().setNumFeatures(1000).setInputCol(tokenizer.getOutputCol).setOutputCol("features")
       // Estimator: logistic regression
       val lr = new LogisticRegression()
         .setMaxIter(10)    // maximum number of iterations
         .setRegParam(0.01) // regularization parameter
       val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
       // Fit the pipeline to the training documents, producing a PipelineModel
       val model = pipeline.fit(training)
       // Test data (unlabeled)
       val test= spark.createDataFrame(Seq((4,"spark i like"),(5,"hadoop spark book"),(6,"soyo9 soy 88"))).toDF("id","text")
          test.show()
       model.transform(test).schema.foreach(println)
       model.transform(test)
         .select("id", "text", "probability", "prediction")
         .collect()
         .foreach { case Row(id: Int, text: String, prob: Vector, prediction: Double) =>
           println(s"($id,$text)----->prob=$prob,prediction=$prediction")
         }
       // Intermediate columns produced by the transformers
       model.transform(test).select("id", "text", "features", "rawPrediction")
         .collect()
         .foreach { case Row(id: Int, text: String, features: Vector, rawPrediction: Vector) =>
           println(s"id=$id,text=$text,features=$features,rawPrediction=$rawPrediction")
         }

    spark.stop()
  }
}
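HashingTF does not build a vocabulary; it maps each token straight to a column index via the hashing trick, which is why `setNumFeatures(1000)` fixes the vector width up front. A minimal pure-Scala sketch of the idea follows. Spark's `HashingTF` uses MurmurHash3 internally with its own seed, so the exact indices below are illustrative and will not match Spark's output:

```scala
import scala.util.hashing.MurmurHash3

// Illustrative hashing trick: map a term to a bucket in [0, numFeatures),
// then count occurrences per bucket to form a sparse term-frequency vector.
// Spark's HashingTF does the same thing, but with its own hash seed.
object HashingTrickSketch {
  val numFeatures = 1000

  def termIndex(term: String): Int = {
    val h = MurmurHash3.stringHash(term)
    ((h % numFeatures) + numFeatures) % numFeatures // non-negative modulo
  }

  def main(args: Array[String]): Unit = {
    val words = "hadoop spark book".split(" ")
    // Group identical terms into the same bucket and count them
    val tf = words.groupBy(termIndex).map { case (i, ws) => (i, ws.length.toDouble) }
    println(tf) // sparse (index -> count) pairs, one per distinct term
  }
}
```

Because collisions are possible (two different words can share a bucket), `numFeatures` is usually set well above the expected vocabulary size.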

Output:

+---+-----------------+
| id|             text|
+---+-----------------+
|  4|     spark i like|
|  5|hadoop spark book|
|  6|     soyo9 soy 88|
+---+-----------------+

StructField(id,IntegerType,false)
StructField(text,StringType,true)
StructField(words,ArrayType(StringType,true),true)
StructField(features,org.apache.spark.ml.linalg.VectorUDT@...,true)
StructField(rawPrediction,org.apache.spark.ml.linalg.VectorUDT@...,true)
StructField(probability,org.apache.spark.ml.linalg.VectorUDT@...,true)
StructField(prediction,DoubleType,true)
(4,spark i like)----->prob=[0.033501882964501836,0.9664981170354981],prediction=1.0
(5,hadoop spark book)----->prob=[0.011175823696937707,0.9888241763030623],prediction=1.0
(6,soyo9 soy 88)----->prob=[0.26222944363302514,0.7377705563669748],prediction=1.0

Rows 4 and 5 are correctly predicted as class 1. Row 6 is a misclassification (none of its words appear in the training data), but its probability for class 1 (about 0.738) is noticeably lower than the others, reflecting the model's weaker confidence.
id=4,text=spark i like,features=(1000,[105,329,330],[1.0,1.0,1.0]),rawPrediction=[-3.3620777052692805,3.3620777052692805]
id=5,text=hadoop spark book,features=(1000,[105,181,393],[1.0,1.0,1.0]),rawPrediction=[-4.482763689867715,4.482763689867715]
id=6,text=soyo9 soy 88,features=(1000,[543,602,976],[1.0,1.0,1.0]),rawPrediction=[-1.0344130174468225,1.0344130174468225]
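The `probability` column is simply the logistic sigmoid applied to the `rawPrediction` margin: for binary logistic regression, P(class 1) = 1 / (1 + e^(-margin)), where the margin is the second component of `rawPrediction`. A quick pure-Scala check, using the margin for id=4 copied from the output above:

```scala
// Verify that probability(1) == sigmoid(rawPrediction(1)) for binary logistic regression
object SigmoidCheck {
  def sigmoid(z: Double): Double = 1.0 / (1.0 + math.exp(-z))

  def main(args: Array[String]): Unit = {
    val margin = 3.3620777052692805 // rawPrediction(1) for id=4 in the output above
    println(sigmoid(margin))        // ≈ 0.9664981170354981, matching the probability column
  }
}
```

This is also why the signs of the two `rawPrediction` components mirror each other: they are the negated and plain margin, whose sigmoids sum to 1.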
