Reading a huge CSV file with Spark
I have a 27 GB gzipped CSV file that I want to read with Spark. Our largest node has 30 GB of memory. When I try to read the file, only one executor is actually loading the data (I am monitoring memory and network) while the other four sit idle, and after a while the job crashes with memory problems. Is there any way to read this file in parallel?
Dataset<Row> result = sparkSession.read()
    .option("header", "true")
    .option("escape", "\"")
    .option("multiLine", "true")
    .format("csv")
    .load("s3a://csv-bucket");

// repartition returns a new Dataset; the result must be assigned (or chained)
result = result.repartition(10);
spark_conf:
  spark.executor.memoryOverhead: "512"
  spark.executor.cores: "5"
driver:
  memory: 10G
executor:
  instances: 5
  memory: 30G
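For reference, a minimal sketch of how roughly equivalent settings could be applied when building the session in code (the app name is made up; the values mirror the spec above, and driver memory normally has to be set at launch time rather than from inside a running application):

import org.apache.spark.sql.SparkSession

// Sketch only: values copied from the spec above; "huge-csv" is a made-up app name.
val spark = SparkSession.builder()
  .appName("huge-csv")
  .config("spark.executor.instances", "5")
  .config("spark.executor.cores", "5")
  .config("spark.executor.memory", "30g")
  .config("spark.executor.memoryOverhead", "512")
  .config("spark.driver.memory", "10g") // typically only honored at submit time / in cluster mode
  .getOrCreate()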
Answer
When you are dealing with data this large, you have to repartition it. In Spark, the unit of parallelism is the partition. Note also that a .gz file is not splittable (and multiLine mode likewise prevents input splitting), so Spark has to read the entire compressed file in a single task; repartitioning right after the read is what spreads the subsequent work across executors.
Dataset<Row> result = sparkSession.read()
    .option("header", "true")
    .option("escape", "\"")
    .option("multiLine", "true")
    .format("csv")
    .load("s3a://csv-bucket");

// executors (5) × cores per executor (5) × a factor of 2-3 (here 3) = 75
result = result.repartition(75);

Repartitioning to roughly the number of executors (5) × cores per executor (5) × a replication factor of 2-3, i.e. 5 × 5 × 3 = 75 here, should give you a uniform distribution of the data.
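The same rule of thumb can be derived from the session's own settings instead of being hard-coded; a hedged sketch in Scala (the default strings mirror the spec above, and the bucket path is the one from the question):

// Sketch: compute a partition count from the executor settings rather than hard-coding 75.
val executors = spark.conf.get("spark.executor.instances", "5").toInt
val coresPerExecutor = spark.conf.get("spark.executor.cores", "5").toInt
val factor = 3 // the 2-3 multiplier suggested above

val result = spark.read
  .option("header", "true")
  .option("escape", "\"")
  .option("multiLine", "true")
  .csv("s3a://csv-bucket")
  .repartition(executors * coresPerExecutor * factor)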
Cross-check how many records land in each partition:

import org.apache.spark.sql.functions.spark_partition_id
yourcsvdataframe.groupBy(spark_partition_id()).count.show()
Example:
// The whole CSV string becomes a Dataset[String]; toDS needs the session's implicits.
import spark.implicits._
import org.apache.spark.sql.DataFrame

val mycsvdata =
  """
    |rank,freq,Infinitiv,Unreg,Trans,"Präsens_ich","Präsens_du","Präsens_er, sie, es","Präteritum_ich","Partizip II","Konjunktiv II_ich","Imperativ Singular","Imperativ Plural",Hilfsverb
    |3,3796784,sein,"","",bin,bist,ist,war,gewesen,"wäre",sei,seid,sein
    |8,1618550,haben,"","",habe,hast,hat,hatte,gehabt,"hätte",habe,habt,haben
    |10,1379496,einen,"","",eine,einst,eint,einte,geeint,einte,eine,eint,haben
    |12,948246,werden,"","",werde,wirst,wird,wurde,geworden,"würde",werde,werdet,sein
  """.stripMargin.lines.toList.toDS // .lines is Scala 2.12 StringOps; use .linesIterator on 2.13

val csvdf: DataFrame = spark.read
  .option("header", true)
  .csv(mycsvdata)
csvdf.show(false)

println("all 4 records are in a single partition 0")
import org.apache.spark.sql.functions.spark_partition_id
csvdf.groupBy(spark_partition_id()).count.show()

println("now divide data... 4 records to 2 per partition")
csvdf.repartition(2).groupBy(spark_partition_id()).count.show()
Result:
+----+-------+---------+-----+-----+-----------+----------+-------------------+--------------+-----------+-----------------+------------------+----------------+---------+
|rank|freq |Infinitiv|Unreg|Trans|Präsens_ich|Präsens_du|Präsens_er, sie, es|Präteritum_ich|Partizip II|Konjunktiv II_ich|Imperativ Singular|Imperativ Plural|Hilfsverb|
+----+-------+---------+-----+-----+-----------+----------+-------------------+--------------+-----------+-----------------+------------------+----------------+---------+
|3 |3796784|sein |null |null |bin |bist |ist |war |gewesen |wäre |sei |seid |sein |
|8 |1618550|haben |null |null |habe |hast |hat |hatte |gehabt |hätte |habe |habt |haben |
|10 |1379496|einen |null |null |eine |einst |eint |einte |geeint |einte |eine |eint |haben |
|12 |948246 |werden |null |null |werde |wirst |wird |wurde |geworden |würde |werde |werdet |sein |
+----+-------+---------+-----+-----+-----------+----------+-------------------+--------------+-----------+-----------------+------------------+----------------+---------+
all 4 records are in a single partition 0
+--------------------+-----+
|SPARK_PARTITION_ID()|count|
+--------------------+-----+
| 0| 4|
+--------------------+-----+
now divide data... 4 records to 2 per partition
+--------------------+-----+
|SPARK_PARTITION_ID()|count|
+--------------------+-----+
| 1| 2|
| 0| 2|
+--------------------+-----+
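As a lighter-weight cross-check (a sketch using the csvdf frame from the example above), the partition count alone can be read straight off the underlying RDD:

// Quick look at partition counts without running an aggregation
println(csvdf.rdd.getNumPartitions)                 // 1 before repartitioning
println(csvdf.repartition(2).rdd.getNumPartitions)  // 2 afterwards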