Exception Handling in Apache Beam pipelines when writing to database using Java

Posted: 2019-05-31 16:04:37

When writing simple records into a table in Postgres (it could be any database) at the end of a pipeline, some of the would-be records violate a uniqueness constraint and throw an exception. As far as I can tell there is no straightforward way to handle these gracefully - the pipeline either errors out completely or, depending on the runner, enters an endless death spiral.

There seems to be no mention of error handling for this case in the docs. The Medium posts on error handling don't seem to apply to this particular kind of PTransform that returns PDone.

This answer is hard to follow and lacks examples.

In my example I read a file containing 2 duplicate rows and try to write them into a table.

CREATE TABLE foo (
    field CHARACTER VARYING(100) UNIQUE
);

foo.txt contains:

a
a

The pipeline looks like this:

Pipeline p = Pipeline.create();
p.apply(TextIO.read().from("/path/to/foo.txt"))
    .apply(
        JdbcIO.<String>write()
            .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
                "org.postgresql.Driver", "jdbc:postgresql://localhost:5432/somedb"))
            .withStatement("INSERT INTO foo (field) VALUES (?)")
            .withPreparedStatementSetter(new JdbcIO.PreparedStatementSetter<String>() {
                private static final long serialVersionUID = 1L;

                public void setParameters(String element, PreparedStatement query) throws SQLException {
                    query.setString(1, element);
                }
            }));
p.run();

This is the output of the simple example above:

[WARNING]
org.apache.beam.sdk.Pipeline$PipelineExecutionException: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO foo (field) VALUES ('a') was aborted: ERROR: duplicate key value violates unique constraint "foo_field_key"
  Detail: Key (field)=(a) already exists.  Call getNextException to see other errors in the batch.
    at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish (DirectRunner.java:332)
    at org.apache.beam.runners.direct.DirectRunner$DirectPipelineResult.waitUntilFinish (DirectRunner.java:302)
    at org.apache.beam.runners.direct.DirectRunner.run (DirectRunner.java:197)
    at org.apache.beam.runners.direct.DirectRunner.run (DirectRunner.java:64)
    at org.apache.beam.sdk.Pipeline.run (Pipeline.java:313)
    at org.apache.beam.sdk.Pipeline.run (Pipeline.java:299)
    at com.thing.Main.main (Main.java:105)
    at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke (Method.java:498)
    at org.codehaus.mojo.exec.ExecJavaMojo$1.run (ExecJavaMojo.java:282)
    at java.lang.Thread.run (Thread.java:748)
Caused by: java.sql.BatchUpdateException: Batch entry 0 INSERT INTO foo (field) VALUES ('a') was aborted: ERROR: duplicate key value violates unique constraint "foo_field_key"
  Detail: Key (field)=(a) already exists.  Call getNextException to see other errors in the batch.
    at org.postgresql.jdbc.BatchResultHandler.handleError (BatchResultHandler.java:148)
    at org.postgresql.core.ResultHandlerDelegate.handleError (ResultHandlerDelegate.java:50)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults (QueryExecutorImpl.java:2184)
    at org.postgresql.core.v3.QueryExecutorImpl.execute (QueryExecutorImpl.java:481)
    at org.postgresql.jdbc.PgStatement.executeBatch (PgStatement.java:840)
    at org.postgresql.jdbc.PgPreparedStatement.executeBatch (PgPreparedStatement.java:1538)
    at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
    at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
    at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
    at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
    at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.executeBatch (JdbcIO.java:846)
    at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.finishBundle (JdbcIO.java:819)
Caused by: org.postgresql.util.PSQLException: ERROR: duplicate key value violates unique constraint "foo_field_key"
  Detail: Key (field)=(a) already exists.
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse (QueryExecutorImpl.java:2440)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults (QueryExecutorImpl.java:2183)
    at org.postgresql.core.v3.QueryExecutorImpl.execute (QueryExecutorImpl.java:481)
    at org.postgresql.jdbc.PgStatement.executeBatch (PgStatement.java:840)
    at org.postgresql.jdbc.PgPreparedStatement.executeBatch (PgPreparedStatement.java:1538)
    at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
    at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
    at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
    at org.apache.commons.dbcp2.DelegatingStatement.executeBatch (DelegatingStatement.java:345)
    at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.executeBatch (JdbcIO.java:846)
    at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn.finishBundle (JdbcIO.java:819)
    at org.apache.beam.sdk.io.jdbc.JdbcIO$Write$WriteFn$DoFnInvoker.invokeFinishBundle (Unknown Source)
    at org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimpleDoFnRunner.finishBundle (SimpleDoFnRunner.java:285)
    at org.apache.beam.repackaged.beam_runners_direct_java.runners.core.SimplePushbackSideInputDoFnRunner.finishBundle (SimplePushbackSideInputDoFnRunner.java:118)
    at org.apache.beam.runners.direct.ParDoEvaluator.finishBundle (ParDoEvaluator.java:223)
    at org.apache.beam.runners.direct.DoFnLifecycleManagerRemovingTransformEvaluator.finishBundle (DoFnLifecycleManagerRemovingTransformEvaluator.java:73)
    at org.apache.beam.runners.direct.DirectTransformExecutor.finishBundle (DirectTransformExecutor.java:188)
    at org.apache.beam.runners.direct.DirectTransformExecutor.run (DirectTransformExecutor.java:126)
    at java.util.concurrent.Executors$RunnableAdapter.call (Executors.java:511)
    at java.util.concurrent.FutureTask.run (FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker (ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run (ThreadPoolExecutor.java:624)
    at java.lang.Thread.run (Thread.java:748)

I'd like to be able to trap that exception and divert it to some dead-letter construct.

Comments:

Answer 1:

There is no generic way to do this in Beam yet. There are discussions from time to time about modifying the IOs so that they don't return PDone, but as far as I know nothing is readily available.

At the moment I can think of a few workarounds, all of them far from ideal:

- handle restarting the pipeline in your driver program when it fails;
- copy-paste JdbcIO or parts of it, or implement your own Jdbc ParDo with custom exception handling (a rough sketch of this approach is shown below);
- add an exception-handling feature to JdbcIO and contribute it to Beam, which would be very much appreciated;
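
For the second workaround, here is a minimal sketch of what a hand-rolled Jdbc ParDo with a dead-letter output could look like. The class name, tuple tags and output path are made up for illustration, and the batching, connection pooling and retry behaviour that JdbcIO normally provides is omitted:

// Imports cover both snippets below.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.transforms.DoFn;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.beam.sdk.values.PCollectionTuple;
import org.apache.beam.sdk.values.TupleTag;
import org.apache.beam.sdk.values.TupleTagList;

// Hypothetical stand-in for JdbcIO.write(): inserts one element per statement and
// routes rows that fail with an SQLException to a dead-letter output instead of
// failing the bundle. No batching, pooling or retries, unlike the real JdbcIO.
class InsertWithDeadLetterFn extends DoFn<String, String> {
    static final TupleTag<String> INSERTED = new TupleTag<String>() {};
    static final TupleTag<String> FAILED = new TupleTag<String>() {};

    private transient Connection connection;

    @Setup
    public void setup() throws SQLException {
        // Credentials omitted for brevity; adjust for your environment.
        connection = DriverManager.getConnection("jdbc:postgresql://localhost:5432/somedb");
    }

    @ProcessElement
    public void processElement(ProcessContext c) {
        String element = c.element();
        try (PreparedStatement statement =
                 connection.prepareStatement("INSERT INTO foo (field) VALUES (?)")) {
            statement.setString(1, element);
            statement.executeUpdate();
            c.output(element);              // main output: inserted successfully
        } catch (SQLException e) {
            c.output(FAILED, element);      // dead letter: e.g. duplicate key violation
        }
    }

    @Teardown
    public void teardown() throws SQLException {
        if (connection != null) {
            connection.close();
        }
    }
}

Wired into the original pipeline it would replace the JdbcIO.write() step, and the failed records become an ordinary PCollection you can write wherever you like (the output path here is just an example):

PCollectionTuple results = p
    .apply(TextIO.read().from("/path/to/foo.txt"))
    .apply(ParDo.of(new InsertWithDeadLetterFn())
        .withOutputTags(InsertWithDeadLetterFn.INSERTED,
                        TupleTagList.of(InsertWithDeadLetterFn.FAILED)));
results.get(InsertWithDeadLetterFn.FAILED)
    .apply(TextIO.write().to("/path/to/failed-inserts"));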

Comments:

Answer 2:

I was facing the same problem, so I created a custom JdbcIO write that returns a PCollectionTuple instead of PDone, in which I separate the records that were inserted successfully from the ones that threw an SQLException while executing the batch in WriteFn.

More details are in this link: https://sachin4java.blogspot.com/2021/11/extract-error-records-while-inserting.html
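
Assuming such a custom write exposes one tag for the successfully inserted records and one for the failures (the transform and tag names below are hypothetical; the real ones depend on the implementation in the linked post), consuming its PCollectionTuple could look roughly like this:

// Hypothetical tags; the real names depend on the linked implementation.
TupleTag<String> insertedTag = new TupleTag<String>() {};
TupleTag<String> errorTag = new TupleTag<String>() {};

PCollectionTuple result = p
    .apply(TextIO.read().from("/path/to/foo.txt"))
    .apply(new CustomJdbcWrite(insertedTag, errorTag)); // hypothetical PTransform<PCollection<String>, PCollectionTuple>

// Records that threw an SQLException inside WriteFn end up in a dead-letter
// PCollection instead of failing the pipeline; here they are dumped to a file.
result.get(errorTag)
    .apply(TextIO.write().to("/path/to/insert-errors"));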

Comments:
