Load multiple JSON zip files from GCS to BigQuery using a Dataflow pipeline (Python)
Posted: 2021-02-16 18:38:02

【Question】: I am completely new to Dataflow and am a novice programmer. I'm looking for help designing a Dataflow pipeline, written in Python, that reads multi-part zipped JSON files stored on GCS and loads them into BigQuery. The source cannot provide us with a schema for the files/tables, so I'm looking for a schema auto-detect option, along these lines:
job_config = bigquery.LoadJobConfig(
    autodetect=True,
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON
)
I don't need any transformations; I just want to load the JSON into BQ.
I couldn't find any sample template on Google that reads json.zip files with schema auto-detection and writes to BQ. Could someone help me with a template or the syntax for the above requirement, or with tips and points I need to consider?
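For context, the full client-library load I have in mind would look roughly like the sketch below (bucket, path and table names are placeholders only):

from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    autodetect=True,
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
)
# Placeholder URI and table id. BigQuery load jobs can read gzip-compressed
# JSON directly, but not ZIP archives, hence the need for a pipeline.
load_job = client.load_table_from_uri(
    'gs://your-bucket/path/*.json',
    'your-project.your_dataset.your_table',
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish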
【Comments】:
【Answer 1】: Beam Python's fileio transforms have what you need to read compressed JSON. You can specify the compression type and the file suffix. The tutorial on File Processing will also help.
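For example, a rough sketch for .zip archives on GCS (the file pattern is a placeholder, and unpacking the archive in memory with Python's zipfile is just one way to do it):

import io
import json
import zipfile

import apache_beam as beam
from apache_beam.io import fileio


def extract_json_lines(readable_file):
    # readable_file.read() returns the raw bytes of one matched .zip file.
    with zipfile.ZipFile(io.BytesIO(readable_file.read())) as archive:
        for name in archive.namelist():
            for line in archive.read(name).decode('utf-8').splitlines():
                if line.strip():
                    yield json.loads(line)


with beam.Pipeline() as p:
    records = (
        p
        | fileio.MatchFiles('gs://your-bucket/path/*.zip')  # placeholder pattern
        | fileio.ReadMatches()
        | beam.FlatMap(extract_json_lines)
    )

Each element of records is then a Python dict that can be passed on to WriteToBigQuery.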
【Comments】:
Thanks, Kenn.

【Answer 2】: Here is sample executable Python Beam code along with sample raw data.
#------------Import Lib-----------------------#
import apache_beam as beam
from apache_beam import window
from apache_beam.options.pipeline_options import PipelineOptions, StandardOptions
import os, sys, time
import argparse
import json
import logging
from apache_beam.options.pipeline_options import SetupOptions
from datetime import datetime

#------------Set up BQ parameters-----------------------#
# Replace with Project Id
project = 'xxxxxxxxxxx'
input = 'gs://FILE-Path'

#------------Splitting Of Records-----------------------#
class Transaction_ECOM(beam.DoFn):
    """Parse one JSON line and map it to a flat dict matching the BigQuery schema."""
    def process(self, element):
        logging.info(element)
        result = json.loads(element)
        data_bkt = result.get('_bkt', 'null')
        data_cd = result.get('_cd', 'null')
        data_indextime = result.get('_indextime', '0')
        data_kv = result.get('_kv', 'null')
        data_raw = result['_raw']
        data_raw1 = data_raw.replace("\n", "")
        data_serial = result.get('_serial', 'null')
        data_si = str(result.get('_si', 'null'))
        data_sourcetype = result.get('_sourcetype', 'null')
        data_subsecond = result.get('_subsecond', 'null')
        data_time = result.get('_time', 'null')
        data_host = result.get('host', 'null')
        data_index = result.get('index', 'null')
        data_linecount = result.get('linecount', 'null')
        data_source = result.get('source', 'null')
        data_sourcetype1 = result.get('sourcetype', 'null')
        data_splunk_server = result.get('splunk_server', 'null')
        # Return a one-element list of dicts; WriteToBigQuery consumes one dict per row.
        # datetime_indextime is derived from the epoch timestamp in _indextime.
        return [{"datetime_indextime": time.strftime('%Y-%m-%dT%H:%M:%S', time.localtime(int(data_indextime))), "_bkt": data_bkt, "_cd": data_cd, "_indextime": data_indextime, "_kv": data_kv, "_raw": data_raw1, "_serial": data_serial, "_si": data_si, "_sourcetype": data_sourcetype, "_subsecond": data_subsecond, "_time": data_time, "host": data_host, "index": data_index, "linecount": data_linecount, "source": data_source, "sourcetype": data_sourcetype1, "splunk_server": data_splunk_server}]


def run(argv=None, save_main_session=True):
    parser = argparse.ArgumentParser()
    known_args, pipeline_args = parser.parse_known_args(argv)
    pipeline_options = PipelineOptions(pipeline_args)
    pipeline_options.view_as(SetupOptions).save_main_session = save_main_session
    p1 = beam.Pipeline(options=pipeline_options)

    # Read newline-delimited JSON from GCS, one element per line.
    data_loading = (
        p1
        | 'Read from File' >> beam.io.ReadFromText(input, skip_header_lines=0)
    )

    project_id = "xxxxxxxxxxx"
    dataset_id = 'test123'
    table_schema_ECOM = ('datetime_indextime:DATETIME, _bkt:STRING, _cd:STRING, _indextime:STRING, _kv:STRING, _raw:STRING, _serial:STRING, _si:STRING, _sourcetype:STRING, _subsecond:STRING, _time:STRING, host:STRING, index:STRING, linecount:STRING, source:STRING, sourcetype:STRING, splunk_server:STRING')

    # Persist to BigQuery
    # WriteToBigQuery accepts the data as a list of JSON objects
    #---------------------Index = ITF----------------------------------------------------------------------------------------------------------------------
    result = (
        data_loading
        | 'Clean-ITF' >> beam.ParDo(Transaction_ECOM())
        | 'Write-ITF' >> beam.io.WriteToBigQuery(
            table='CFF_ABC',
            dataset=dataset_id,
            project=project_id,
            schema=table_schema_ECOM,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))

    result = p1.run()
    result.wait_until_finish()


if __name__ == '__main__':
    # Point GOOGLE_APPLICATION_CREDENTIALS at the service-account key file,
    # then build and run the pipeline defined in run().
    path_service_account = '/home/vibhg/Splunk/CFF/xxxxxxxxxxx-abcder125.json'
    os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = path_service_account
    run()
It imports a few extra libraries; you can ignore those.
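Note that the 'Read from File' step above assumes uncompressed newline-delimited JSON. If your files were gzip-compressed rather than zipped, ReadFromText could decompress them on the fly; a rough sketch with a placeholder path:

from apache_beam.io.filesystem import CompressionTypes

data_loading = (
    p1
    | 'Read from File' >> beam.io.ReadFromText(
        'gs://your-bucket/path/*.json.gz',       # placeholder path
        compression_type=CompressionTypes.GZIP)  # or leave the default AUTO to infer from the suffix
)

ZIP archives are not one of the supported compression types, so for .zip input you would still need a fileio-based read like the one in the other answer.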
Sample data that could be stored on GCS looks like this:
"_bkt": "A1E8-A5370FECA146", "_cd": "412:140787687", "_indextime": "1611584940", "_kv": "1", "_raw": "2021-01-25 14:28:59,126 INFO [com.abcd.mfs.builder.builders.BsLogEntryBuilder] [-] LogEntryType=\"BsCall\", fulName=\"EBCMFSSALES02\", BusinessServiceName=\"BsSalesOrderCreated\", Locality=\"NA\", Success=\"True\", BsExecutionTime=\"00:00:00.005\", OrderNo=\"374941817\", Locality=\"NA\" , [fulName=\"EBCMFSSALES02\"], [bsName=\"BsSalesOrderCreated\"], [userId=\"s-oitp-u-global\"], [userIdRegion=\"NA\"], [msgId=\"aaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbcccc\"], [msgIdSeq=\"2\"], [originator=\"ISOM\"] ", "_serial": "0", "_si": ["9ttr-bfc-gcp-europe-besti1", "itf"], "_sourcetype": "BBClog", "_subsecond": ".126", "_time": "2021-01-25 14:28:59.126 UTC", "host": "shampo-lx4821.abcd.com", "index": "itf", "linecount": "1", "source": "/opt/VRE/WebSphere/lickserv/profiles/appsrv01/logs/na-ebtree02_srv/log4j2.log", "sourcetype": "BBClog", "web_server": "9ttr-bfc-gcp-europe-besti1"
"_bkt": "itf~412~2EE5428B-7CEA-4C49-A1E8-A5370FECA146", "_cd": "412:140787687", "_indextime": "1611584940", "_kv": "1", "_raw": "2021-01-25 14:28:59,126 INFO [com.abcd.mfs.builder.builders.BsLogEntryBuilder] [-] LogEntryType=\"BsCall\", fulName=\"EBCMFSSALES02\", BusinessServiceName=\"BsSalesOrderCreated\", Locality=\"NA\", Success=\"True\", BsExecutionTime=\"00:00:00.005\", OrderNo=\"374941817\", Locality=\"NA\" , [fulName=\"EBCMFSSALES02\"], [bsName=\"BsSalesOrderCreated\"], [userId=\"s-oitp-u-global\"], [userIdRegion=\"NA\"], [msgId=\"aaaaaaaaaaaaaaaaaaabbbbbbbbbbbbbbcccc\"], [msgIdSeq=\"2\"], [originator=\"ISOM\"] ", "_serial": "0", "_si": ["9ttr-bfc-gcp-europe-besti1", "itf"], "_sourcetype": "BBClog", "_subsecond": ".126", "_time": "2021-01-25 14:28:59.126 UTC", "host": "shampo-lx4821.abcd.com", "index": "itf", "linecount": "1", "source": "/opt/VRE/WebSphere/lickserv/profiles/appsrv01/logs/na-ebtree02_srv/log4j2.log", "sourcetype": "BBClog", "web_server": "9ttr-bfc-gcp-europe-besti1"
"_bkt": "9-A1E8-A5370FECA146", "_cd": "412:140787671", "_indextime": "1611584940", "_kv": "1", "_raw": "2021-01-25 14:28:58,659 INFO [com.abcd.mfs.builder.builders.BsLogEntryBuilder] [-] LogEntryType=\"BsCall\", fulName=\"EBCMFSSALES02\", BusinessServiceName=\"BsCreateOrderV2\", BsExecutionTime=\"00:00:01.568\", OrderNo=\"374942155\", CountryCode=\"US\", ClientSystem=\"owfe-webapp\" , [fulName=\"EBCMFSSALES02\"], [bsName=\"BsCreateOrderV2\"], [userId=\"s-salja1-u-irssemal\"], [userIdRegion=\"NA\"], [msgId=\"6652311fece28966\"], [msgIdSeq=\"25\"], [originator=\"SellingApi\"] ", "_serial": "1", "_si": ["9ttr-bfc-gcp-europe-besti1", "itf"], "_sourcetype": "BBClog", "_subsecond": ".659", "_time": "2021-01-25 14:28:58.659 UTC", "host": "shampo-lx4821.abcd.com", "index": "itf", "linecount": "1", "source": "/opt/VRE/WebSphere/lickserv/profiles/appsrv01/logs/na-ebtree02_srv/log4j2.log", "sourcetype": "BBClog", "web_server": "9ttr-bfc-gcp-europe-besti1"
You can execute the script with the following command:
python script.py --region europe-west1 --project xxxxxxx --temp_location gs://temp/temp --runner DataflowRunner --job_name name
Hope this helps.
【Comments】:
Thank you very much for the code, Vibhor. Since I'm brand new to this, I do have a few questions; it would be great if you could help me with them, and please correct me if my understanding is wrong. In the process method you split up the records, and on return you add 2 time columns to the existing columns. Can I opt for schema auto-detection instead? Also, you haven't specified the file compression, and I need to read multiple zipped JSON files. How do I handle duplicate records? And could you explain the last 4 lines of the code? Thanks again for your help. Looking forward to it.