SparkSQL series: JSON to Map, and multi-file output
Part 1: JSON to Map
Why convert JSON to a Map?
The company has many products and a flood of reported data in wildly inconsistent formats; fields sharing the same name across levels is a common occurrence, which makes parsing very difficult. We need a unified script to extract the fields.
A reported record looks like: {"id":"7","sex":"7","data":{"sex":"13","class":"7"}}
Adding the dependency
We use fastjson to parse JSON into a Map data structure:
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.47</version>
</dependency>
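A minimal sketch of the parsing step in isolation, using the sample record from above; JSON.parseObject with a HashMap target class is the same call used throughout this post:

import com.alibaba.fastjson.JSON
import java.util

// Parse one reported record into a java.util.HashMap.
// Nested objects come back as fastjson JSONObject values.
val raw = """{"id":"7","sex":"7","data":{"sex":"13","class":"7"}}"""
val map = JSON.parseObject(raw, classOf[util.HashMap[String, Object]])
println(map.get("id"))   // the top-level id, "7"
println(map.get("data")) // the nested object, printed as JSON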
Sample data (note that the first line is not valid JSON and the second contains a duplicate "data" key; the parsing code has to tolerate both):
{"id":"7","sex":"7","da","data":{"name":"7","class":"7","data":{"name":"7","class":"7"}}}
{"id":"8","name":"8","data":{"sex":"8","class":"8"},"data":{"sex":"8","class":"8"}}
{"class":"9","data":{"name":"9","sex":"9"}}
{"id":"10","name":"10","data":{"sex":"10","class":"10"}}
{"id":"11","class":"11","data":{"name":"11","sex":"11"}}
Code
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.col
import com.alibaba.fastjson.JSON
import java.util

// Extract the id field on its own and keep every remaining field in `extends`.
val sparkSession = SparkSession.builder().master("local").getOrCreate()
import sparkSession.implicits._

val nameRDD1df = sparkSession.read.textFile("/software/java/idea/data")
val finalResult = nameRDD1df.map(x => {
  // Parse the whole record; a malformed line leaves the map empty.
  var map: util.HashMap[String, Object] = new util.HashMap[String, Object]()
  try {
    map = JSON.parseObject(x, classOf[util.HashMap[String, Object]])
  } catch { case e: Exception => e.printStackTrace() }

  // Flatten the nested "data" object, then overlay the top-level fields
  // (top-level wins on a name clash), and drop id and data themselves.
  val finalMap: util.HashMap[String, Object] = if (map.containsKey("data")) {
    var dataMap: util.HashMap[String, Object] = new util.HashMap[String, Object]()
    try {
      dataMap = JSON.parseObject(map.get("data").toString, classOf[util.HashMap[String, Object]])
    } catch { case e: Exception => e.printStackTrace() }
    dataMap.putAll(map)
    dataMap.remove("id")
    dataMap.remove("data")
    dataMap
  } else new util.HashMap[String, Object]()

  val id = if (map.get("id") == null) "" else map.get("id").toString
  (id, JSON.toJSONString(finalMap, false))
})
  .toDF("id", "extends")
  .filter(col("id") =!= "")
finalResult.show(10, false)
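With the sample data above, the malformed first line parses to an empty map and the line without an id is filtered out, so only ids 8, 10, and 11 survive, each with the leftover fields flattened into the extends JSON string. As a side note, for records that are already flat, Spark's built-in from_json can do the plain JSON-to-Map step without fastjson; a sketch under that assumption (the nested "data" merge above still needs custom code):

import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{MapType, StringType}

// Sketch: parse a flat JSON string straight into a MapType column.
val flat = Seq("""{"id":"7","sex":"7"}""").toDF("value")
  .withColumn("m", from_json(col("value"), MapType(StringType, StringType)))
flat.select(col("m")("id").as("id"), col("m")("sex").as("sex")).show()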
Part 2: Multi-file output
Spark SQL ---> partitionBy
import org.apache.spark.sql.{SaveMode, SparkSession}

val sparkSession = SparkSession.builder().master("local").getOrCreate()
// partitionBy("id") writes one directory per distinct id, e.g. end/id=7/.
sparkSession.read.json("/software/java/idea/data")
  .select("id", "name")
  .write.mode(SaveMode.Append).partitionBy("id")
  .json("/software/java/idea/end")
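A quick way to verify the layout (same paths as above): reading the output directory back lets Spark reconstruct the partition column from the directory names.

// Read the partitioned output back; the `id` column is recovered
// from the directory names (end/id=7/, end/id=8/, ...).
val back = sparkSession.read.json("/software/java/idea/end")
back.show(10, false)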
Spark Core ---> custom OutputFormat
import org.apache.spark.sql.SparkSession
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat

// Route each record into a directory named after its key,
// e.g. end/<key>/part-00000.
class RDDMultipleTextOutputFormat[K, V]() extends MultipleTextOutputFormat[K, V]() {
  override def generateFileNameForKeyValue(key: K, value: V, name: String): String = {
    key + "/" + name
  }
}

val sparkSession = SparkSession.builder().master("local").getOrCreate()
val sparkContext = sparkSession.sparkContext

// saveAsHadoopFile refuses to overwrite, so clear the target directory first.
val fileSystem = FileSystem.get(sparkContext.hadoopConfiguration)
fileSystem.delete(new Path("/software/java/idea/end"), true)

// Input here is pipe-delimited text (field0|field1|field2), not the JSON above.
// Note the escaped "\\|": split takes a regex, and a bare "|" means alternation.
sparkContext.textFile("/software/java/idea/data").map(x => {
  val array = x.split("\\|")
  (array(0) + "=" + array(1), array(2))
}).saveAsHadoopFile("/software/java/idea/end", classOf[String], classOf[String],
  classOf[RDDMultipleTextOutputFormat[_, _]])
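One common refinement, not in the original post: by default the key is also written into every line of the output files. Overriding generateActualKey suppresses it, so the key only decides the directory name. A sketch (the class name KeyAsPathTextOutputFormat is hypothetical):

import org.apache.hadoop.mapred.lib.MultipleTextOutputFormat

// Variant that keeps the key out of the file contents: only the value
// is written, while the key still picks the output directory.
class KeyAsPathTextOutputFormat[K, V]() extends MultipleTextOutputFormat[K, V]() {
  override def generateFileNameForKeyValue(key: K, value: V, name: String): String =
    key + "/" + name

  // Returning null makes TextOutputFormat emit the value alone.
  override def generateActualKey(key: K, value: V): K =
    null.asInstanceOf[K]
}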