How to overwrite S3 data using a Glue job in AWS

Posted: 2020-05-23 09:04:34

[Question]: I have a DynamoDB table, and I am using a Glue job to send the DynamoDB data to S3. Every time the Glue job runs to write new data to S3, it appends to the old data; it should overwrite the old data instead. Job script below:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = "abc", table_name = "xyz", transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = "abc", table_name = "xyz", transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: [mapping = [("address", "string", "address", "string"), ("name", "string", "name", "string"), ("company", "string", "company", "string"), ("id", "string", "id", "string")], transformation_ctx = "applymapping1"]
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [("address", "string", "address", "string"), ("name", "string", "name", "string"), ("company", "string", "company", "string"), ("id", "string", "id", "string")], transformation_ctx = "applymapping1")
## @type: ResolveChoice
## @args: [choice = "make_struct", transformation_ctx = "resolvechoice2"]
## @return: resolvechoice2
## @inputs: [frame = applymapping1]
resolvechoice2 = ResolveChoice.apply(frame = applymapping1, choice = "make_struct", transformation_ctx = "resolvechoice2")
## @type: DropNullFields
## @args: [transformation_ctx = "dropnullfields3"]
## @return: dropnullfields3
## @inputs: [frame = resolvechoice2]
dropnullfields3 = DropNullFields.apply(frame = resolvechoice2, transformation_ctx = "dropnullfields3")
## @type: DataSink
## @args: [connection_type = "s3", connection_options = "path": "s3://xyztable", format = "parquet", transformation_ctx = "datasink4"]
## @return: datasink4
## @inputs: [frame = dropnullfields3]
datasink4 = glueContext.write_dynamic_frame.from_options(frame = dropnullfields3, connection_type = "s3", connection_options = {"path": "s3://xyztable"}, format = "parquet", transformation_ctx = "datasink4")
job.commit()
[Comments]:
Can you share the code? That would help give better suggestions.
I have uploaded the script. I am getting this error (from the YARN logs): IllegalArgumentException: 'Can not create a Path from an empty string', Traceback (most recent call last).
What path are you passing?
The S3 bucket path.
df.write.mode('overwrite').parquet('s3://xyztable')

[Answer 1]: Replace your second-to-last line with this:
df = dropnullfields3.toDF()
df.write.mode('overwrite').parquet('s3://xyzPath')
This will replace the folder every time you run the job. Since the Glue library does not currently support save modes, we use the PySpark DataFrame writer here instead.
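For reference, a minimal sketch of how the end of the question's script would look after this change (assuming the same dropnullfields3 frame from the question; s3://xyzPath is a placeholder output prefix):

# ... previous transforms unchanged up to dropnullfields3 ...

# Convert the DynamicFrame to a Spark DataFrame so a save mode can be used,
# then overwrite the target prefix instead of appending new files to it.
df = dropnullfields3.toDF()
df.write.mode('overwrite').parquet('s3://xyzPath')

job.commit()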
[Comments]:
[Answer 2]: If you are trying to overwrite data in S3, a DynamicFrame currently cannot change its save mode, but you can convert it with toDF() and use the approach shared here.
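A rough sketch of that pattern (assuming the question's dropnullfields3 frame and a placeholder output path; the DynamicFrame.fromDF step is only needed if later steps still require Glue transforms or sinks):

from awsglue.dynamicframe import DynamicFrame

# Switch to the Spark DataFrame API, which supports save modes.
df = dropnullfields3.toDF()

# 'overwrite' replaces whatever already exists under the target prefix.
df.write.mode('overwrite').parquet('s3://xyzPath')

# Optional: convert back to a DynamicFrame for any remaining Glue transforms.
overwritten = DynamicFrame.fromDF(df, glueContext, "overwritten")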
[Comments]: