Spark running on EMR sporadically fails with 403 when writing to S3
Posted: 2020-04-02 17:52:33
Question:

I have a Spark job running on AWS EMR. The job does some data processing and writes a DataFrame out to S3 as CSV.

The job sporadically fails with a 403 while writing to S3. I'm almost certain this isn't a permissions problem, because the job completes successfully and writes its output to S3 without any issue nearly 70% of the time, but every now and then I get this:
Traceback (most recent call last):
File "/home/sparkProcessor/controller.py", line 58, in <module>
processor.process()
File "/home/sparkProcessor/processor.py", line 32, in process
self.write_outputs(df1, df2, df3)
File "/home/sparkProcessor/processor.py", line 168, in write_outputs
df1.coalesce(1).write.csv(self.configurations['OUTPUT_DIRECTORY'] + "/" + dir_name+ "/output", mode="overwrite", header="true")
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/readwriter.py", line 931, in csv
File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
File "/usr/lib/spark/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling o199.csv.
: org.apache.spark.SparkException: Job aborted.
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:198)
at org.apache.spark.sql.execution.datasources.InsertIntoHadoopFsRelationCommand.run(InsertIntoHadoopFsRelationCommand.scala:159)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult$lzycompute(commands.scala:104)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.sideEffectResult(commands.scala:102)
at org.apache.spark.sql.execution.command.DataWritingCommandExec.doExecute(commands.scala:122)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:156)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
at org.apache.spark.sql.DataFrameWriter.csv(DataFrameWriter.scala:664)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.file.AccessDeniedException: s3a://sparkOutputBucket/output/_temporary/0/task_20200402173032_0032_m_000000/part-00000-47cc7efd-8dfe-4d5c-8e20-c5b3ab08278b-c000.csv: getFileStatus on s3a://sparkOutputBucket/output/_temporary/0/task_20200402173032_0032_m_000000/part-00000-47cc7efd-8dfe-4d5c-8e20-c5b3ab08278b-c000.csv: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: <request ID>; S3 Extended Request ID: <request ID>), S3 Extended Request ID: <Request ID>
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:158)
at org.apache.hadoop.fs.s3a.S3AUtils.translateException(S3AUtils.java:101)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1568)
at org.apache.hadoop.fs.s3a.S3AFileSystem.innerRename(S3AFileSystem.java:707)
at org.apache.hadoop.fs.s3a.S3AFileSystem.rename(S3AFileSystem.java:662)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:457)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.mergePaths(FileOutputCommitter.java:471)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJobInternal(FileOutputCommitter.java:388)
at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.commitJob(FileOutputCommitter.java:360)
at org.apache.hadoop.mapreduce.lib.output.DirectFileOutputCommitter.commitJob(DirectFileOutputCommitter.java:111)
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:166)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.write(FileFormatWriter.scala:187)
... 33 more
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: Forbidden (Service: Amazon S3; Status Code: 403; Error Code: 403 Forbidden; Request ID: <Request ID>; S3 Extended Request ID: <Request ID>), S3 Extended Request ID: <Request ID>
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1712)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1367)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1113)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:770)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:744)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:726)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:686)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:668)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:532)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:512)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4921)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4867)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1320)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:1294)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getObjectMetadata(S3AFileSystem.java:904)
at org.apache.hadoop.fs.s3a.S3AFileSystem.getFileStatus(S3AFileSystem.java:1553)
... 42 more
I searched online to see whether anyone has run into this, without success. The closest I got was a thread suggesting it might be related to the use of s3a, but it offered no advice on how to actually fix it.
I'd appreciate any help anyone can offer. Thanks!
P.S. It looks as if a getFileStatus permission isn't being satisfied, yet 70% of the time nothing triggers it, and I don't see any getFileStatus permission listed among the S3 role's actions either.
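For what it's worth, getFileStatus is a Hadoop filesystem call, not an IAM action: in the trace it resolves to getObjectMetadata, an S3 HEAD request issued while the committer renames files out of _temporary/. A minimal boto3 sketch of that same HEAD call outside Spark (the key is hypothetical, modeled on the path in the trace):

import boto3

# getFileStatus in the trace maps to getObjectMetadata, i.e. an S3 HEAD
# request made while the committer moves part files out of _temporary/.
s3 = boto3.client("s3")

# Hypothetical key, shaped like the _temporary/ path from the traceback.
s3.head_object(
    Bucket="sparkOutputBucket",
    Key="output/_temporary/0/task_20200402173032_0032_m_000000/part-00000.csv",
)
# Note: when the caller lacks s3:ListBucket on the bucket, S3 reports a
# missing key as 403 Forbidden rather than 404, so a read-after-write race
# on a freshly written part file can surface as an access error.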
Comments:

You should try switching to s3. I believe using s3a/s3n is not recommended on EMR.

Yes, switching to s3 seems to have done the trick. I'd like to know why that is the solution, though.

Answer 1:

https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-plan-file-systems.html
Note: the s3a protocol is not supported. We recommend that you use s3 in place of s3a.
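In practice the fix from the comments is just a scheme change on the output URI, so the write goes through EMRFS (the s3:// connector EMR supports) rather than the Hadoop s3a client. A minimal sketch of the change, reusing the write from the trace (bucket and directory names are the ones shown there; df1 is the DataFrame from the job above):

# Before: s3a://, the Hadoop connector, which EMR does not support. The 403
# surfaced under this scheme during the committer's rename out of _temporary/.
# df1.coalesce(1).write.csv("s3a://sparkOutputBucket/output",
#                           mode="overwrite", header="true")

# After: s3://, which EMR serves through EMRFS, its supported connector.
df1.coalesce(1).write.csv("s3://sparkOutputBucket/output",
                          mode="overwrite", header="true")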
Discussion:
Thanks for sharing that doc!