How to work with Spark DataFrames from Java
Answer A (accepted):

import java.util.Properties;

import org.apache.log4j.Logger;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SaveMode;

public class Demo_mysql3 {
    private static Logger logger = Logger.getLogger(Demo_mysql3.class);

    public static void main(String[] args) {
        // (the method body was truncated in the original answer)
    }
}
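The body of main() is missing from the answer above. Judging by the imports (Properties, SQLContext, SaveMode), it was presumably reading and writing a MySQL table over JDBC. A minimal sketch of that flow with the Spark 1.x Java API follows; the class name, JDBC URL, table names, and credentials are placeholders, not from the original answer.

import java.util.Properties;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SaveMode;

public class MysqlJdbcSketch {  // hypothetical class name
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("MysqlJdbcSketch").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        // Placeholder connection details; the MySQL JDBC driver must be on the classpath.
        String url = "jdbc:mysql://localhost:3306/test";
        Properties props = new Properties();
        props.put("user", "root");
        props.put("password", "password");

        // Read a MySQL table into a DataFrame (Spark 1.x API).
        DataFrame df = sqlContext.read().jdbc(url, "source_table", props);

        // Write it back out to another table, appending rows if the table exists.
        df.write().mode(SaveMode.Append).jdbc(url, "target_table", props);

        sc.stop();
    }
}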
[Spark][Python] DataFrame select operation examples
Continuing from "[Spark][Python] Taking a limited number of records from a DataFrame":
In [4]: peopleDF.select("age")
Out[4]: DataFrame[age: bigint]
In [5]: myDF=people.select("age")
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-5-b5b723b62a49> in <module>()
----> 1 myDF=people.select("age")
NameError: name 'people' is not defined
In [6]: myDF=peopleDF.select("age")
In [7]: myDF.take(3)
17/10/05 05:13:02 INFO storage.MemoryStore: Block broadcast_5 stored as values in memory (estimated size 230.1 KB, free 871.7 KB)
17/10/05 05:13:02 INFO storage.MemoryStore: Block broadcast_5_piece0 stored as bytes in memory (estimated size 21.4 KB, free 893.1 KB)
17/10/05 05:13:02 INFO storage.BlockManagerInfo: Added broadcast_5_piece0 in memory on localhost:55073 (size: 21.4 KB, free: 208.7 MB)
17/10/05 05:13:02 INFO spark.SparkContext: Created broadcast 5 from take at <ipython-input-7-745486715568>:1
17/10/05 05:13:02 INFO storage.MemoryStore: Block broadcast_6 stored as values in memory (estimated size 251.1 KB, free 1144.2 KB)
17/10/05 05:13:02 INFO storage.MemoryStore: Block broadcast_6_piece0 stored as bytes in memory (estimated size 21.6 KB, free 1165.8 KB)
17/10/05 05:13:02 INFO storage.BlockManagerInfo: Added broadcast_6_piece0 in memory on localhost:55073 (size: 21.6 KB, free: 208.7 MB)
17/10/05 05:13:02 INFO spark.SparkContext: Created broadcast 6 from take at <ipython-input-7-745486715568>:1
17/10/05 05:13:03 INFO mapred.FileInputFormat: Total input paths to process : 1
17/10/05 05:13:03 INFO spark.SparkContext: Starting job: take at <ipython-input-7-745486715568>:1
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Got job 2 (take at <ipython-input-7-745486715568>:1) with 1 output partitions
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Final stage: ResultStage 2 (take at <ipython-input-7-745486715568>:1)
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Parents of final stage: List()
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Missing parents: List()
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Submitting ResultStage 2 (MapPartitionsRDD[14] at take at <ipython-input-7-745486715568>:1), which has no missing parents
17/10/05 05:13:03 INFO storage.MemoryStore: Block broadcast_7 stored as values in memory (estimated size 4.3 KB, free 1170.2 KB)
17/10/05 05:13:03 INFO storage.MemoryStore: Block broadcast_7_piece0 stored as bytes in memory (estimated size 2.5 KB, free 1172.6 KB)
17/10/05 05:13:03 INFO storage.BlockManagerInfo: Added broadcast_7_piece0 in memory on localhost:55073 (size: 2.5 KB, free: 208.7 MB)
17/10/05 05:13:03 INFO spark.SparkContext: Created broadcast 7 from broadcast at DAGScheduler.scala:1006
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (MapPartitionsRDD[14] at take at <ipython-input-7-745486715568>:1)
17/10/05 05:13:03 INFO scheduler.TaskSchedulerImpl: Adding task set 2.0 with 1 tasks
17/10/05 05:13:03 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 (TID 2, localhost, partition 0,PROCESS_LOCAL, 2149 bytes)
17/10/05 05:13:03 INFO executor.Executor: Running task 0.0 in stage 2.0 (TID 2)
17/10/05 05:13:03 INFO rdd.HadoopRDD: Input split: hdfs://localhost:8020/user/training/people.json:0+179
17/10/05 05:13:03 INFO codegen.GenerateUnsafeProjection: Code generated in 113.719806 ms
17/10/05 05:13:03 INFO executor.Executor: Finished task 0.0 in stage 2.0 (TID 2). 2235 bytes result sent to driver
17/10/05 05:13:03 INFO scheduler.DAGScheduler: ResultStage 2 (take at <ipython-input-7-745486715568>:1) finished in 0.493 s
17/10/05 05:13:03 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 2.0 (TID 2) in 487 ms on localhost (1/1)
17/10/05 05:13:03 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 2.0, whose tasks have all completed, from pool
17/10/05 05:13:03 INFO scheduler.DAGScheduler: Job 2 finished: take at <ipython-input-7-745486715568>:1, took 0.737231 s
Out[7]: [Row(age=None), Row(age=30), Row(age=19)]
In [8]:
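The first result comes back as Row(age=None) because that record in people.json has no age field, so the inferred bigint column is null there. Since this article is about driving Spark from Java, the same select-then-take sequence might look like the following with the Spark 1.x Java API. This is a sketch: the class name is invented, and the HDFS path is taken from the log output above. As in the PySpark session, select() only defines the projection; the job visible in the INFO logs is triggered by take().

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;

public class PeopleSelectDemo {  // hypothetical class name
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("PeopleSelectDemo").setMaster("local[*]");
        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlContext = new SQLContext(sc);

        // Load the same JSON file the PySpark session used (path from the log output).
        DataFrame peopleDF = sqlContext.read().json("hdfs://localhost:8020/user/training/people.json");

        // select() is lazy: it returns a new DataFrame without running a job.
        DataFrame myDF = peopleDF.select("age");

        // take(3) triggers the job and returns the first three rows to the driver.
        Row[] rows = myDF.take(3);
        for (Row r : rows) {
            System.out.println(r);  // e.g. [null], [30], [19]
        }

        sc.stop();
    }
}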
That covers the basics of working with Spark DataFrames from Java. If this did not solve your problem, see the following article:
Spark study notes — Spark SQL applications — basic Spark DataFrame operations