Managing Streaming Queries in the Spark Structured Streaming Framework

Posted by 修能


  Structured Streaming provides a set of APIs for managing streaming query objects. Users can call these APIs to manually manage queries that have already been started, ensuring that the streaming queries in the system execute in an orderly manner.

1. StreamingQuery

 

  Calling start on a DataStreamWriter launches the streaming query and returns a StreamingQuery object, through which the user can then manage the query.

For example:

val query = df.writeStream.format("console").start() // get the query object

query.id                 // unique identifier of the query; persists across restarts from checkpoint data
query.runId              // unique id of this run of the query, regenerated at every start/restart
query.name               // the auto-generated or user-specified name of the query
query.explain()          // print detailed explanations of the query
query.stop()             // stop the query
query.awaitTermination() // block until the query terminates, by stop() or by an error
query.exception          // the exception, if the query terminated with an error
query.recentProgress     // an array of the most recent progress updates for this query
query.lastProgress       // the most recent progress update of this streaming query
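Putting these calls together, the following is a minimal, self-contained Scala sketch of managing a single query end to end. The local[2] master, the built-in rate source, and the query name "demo" are illustrative assumptions rather than part of the original example.

import org.apache.spark.sql.SparkSession

object QueryLifecycle {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")           // illustrative: run locally with 2 threads
      .appName("query-lifecycle")
      .getOrCreate()

    // The built-in "rate" source emits rows at a fixed pace, handy for demos.
    val df = spark.readStream
      .format("rate")
      .option("rowsPerSecond", "5")
      .load()

    val query = df.writeStream
      .format("console")
      .queryName("demo")            // user-specified name, surfaced via query.name
      .start()

    println(s"id=${query.id} runId=${query.runId} name=${query.name}")

    // Let the query run briefly, then inspect its most recent progress.
    Thread.sleep(10000)
    Option(query.lastProgress).foreach(p => println(p.prettyJson))

    query.stop()
    query.awaitTermination()        // returns immediately: the query already stopped
    spark.stop()
  }
}

Because stop() is called explicitly here, the subsequent awaitTermination() returns at once; in a long-running job one would typically call awaitTermination() alone and let it block until the query stops or fails.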

 

2. StreamingQueryManager

 

  Structured Streaming provides another interface for managing streaming queries: StreamingQueryManager, which is obtained via the streams method on a SparkSession object.

For example:

val spark: SparkSession = ...
val streamManager = spark.streams // streams is a parameterless method, so no parentheses

streamManager.active                // the list of currently active streaming queries
streamManager.get(id)               // look up a query object by its unique id
streamManager.awaitAnyTermination() // block until any one of them terminates
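To see the manager coordinating more than one query, here is a sketch that starts two queries, looks one up by id, and waits for either to terminate. As before, the local session, the rate source, and the names "q1"/"q2" are illustrative assumptions (and the Runnable lambda assumes Scala 2.12+).

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.streaming.StreamingQuery

object ManageQueries {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("manage-queries")
      .getOrCreate()

    // Start an independent console query on the illustrative "rate" source.
    def startQuery(name: String): StreamingQuery =
      spark.readStream.format("rate").load()
        .writeStream.format("console").queryName(name).start()

    val q1 = startQuery("q1")
    val q2 = startQuery("q2")

    val manager = spark.streams
    manager.active.foreach(q => println(s"active: ${q.name} (${q.id})"))

    // Look a query up again by its unique id.
    assert(manager.get(q1.id).runId == q1.runId)

    // awaitAnyTermination() returns as soon as any query stops (or rethrows
    // its failure); stop q2 from another thread to demonstrate this.
    new Thread(() => { Thread.sleep(5000); q2.stop() }).start()
    manager.awaitAnyTermination()
    manager.resetTerminated() // forget terminated queries before waiting again

    q1.stop()
    spark.stop()
  }
}

resetTerminated() clears the manager's record of already-terminated queries, so a later awaitAnyTermination() waits for the remaining active queries instead of returning immediately.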

 


 
