Big Query table create and load data via Python

Posted: 2018-11-07 00:14:16

Problem description:

I am trying to extract events from Mixpanel, process them, and then upload them to a BigQuery table (creating a new table).

I have searched all the resources available to me, but nothing helped solve the problem.

Below is my code:
# Required modules import
import os
from mixpanel_api import Mixpanel
import collections.abc
import json
from google.cloud import storage, bigquery
# Flatten a nested event record into a single-level dict; keys are joined,
# uppercased, and stripped of characters BigQuery does not allow in column names
def flatten(d, parent_key='', sep=''):
    items = []
    for k, v in d.items():
        new_key = parent_key.replace("PROPERTIES","").replace("-","_").replace("[","").replace("]","").replace("/","").replace("\\","").replace("'","") + sep + k.replace(" ","").replace("_","").replace("$","").replace("-","_").replace("[","").replace("]","").replace("/","").replace("\\","").replace("'","") if parent_key else k
        if isinstance(v, collections.abc.MutableMapping):
            # Recurse into nested mappings, carrying the flattened key as prefix
            items.extend(flatten(v, new_key.upper(), sep=sep).items())
        else:
            items.append((new_key.upper().replace("-","_").replace("[","").replace("]","").replace("/","").replace("\\","").replace("'",""), v))
    return dict(items)
# Start of execution point
if __name__ == '__main__':

   # Secret and token to access API
   api_sec = 'aa8af6b5ca5a5ed30e20f3af0acdfb2d'
   api_tok = 'ad5234953e64b908bcd35388875324db'
   # User input for date range and filename
   start_date = str(input('Enter the start date(format: YYYY-MM-DD): '))
   end_date = str(input('Enter the end date(format: YYYY-MM-DD): '))
   file_name = str(input('Enter filename to store output: '))
   file_formatter = str(input('Enter filename to store formatted output: '))
   # Instantiating Mixpanel object
   mpo = Mixpanel(api_sec, api_tok)
   # Export events for the specified date range into the filename provided,
   # as a raw (not gzip-compressed) stream
   mpo.export_events(file_name,
           {'from_date': start_date,
            'to_date': end_date},
           add_gzip_header=False,
           raw_stream=True)

   # Dict for schema derived from file
   schema_dict = {}
   # Flatten each line of the export and write it back out, collecting
   # every key seen along the way so a schema can be built from it
   with open(file_name, 'r') as uf, open(file_formatter, 'a') as ff:
       for line in uf:
           temp = flatten(json.loads(line))
           for k in temp.keys():
               if k not in schema_dict:
                   schema_dict[k] = "STRING"
           json.dump(temp, ff, indent=None, sort_keys=True)
           ff.write('\n')   # One JSON object per line (newline-delimited JSON)

   # Remove the source file; os.remove raises OSError on failure,
   # so reaching the print means the delete succeeded
   if os.path.isfile(file_name):
       os.remove(file_name)
       print('File ' + file_name + ' removed from local storage')
   # Upload the formatted file to a Google Cloud Storage bucket;
   # upload_from_filename raises on failure rather than returning a status
   client = storage.Client()
   bucket = client.get_bucket('yathin-sample-bucket')
   blob = bucket.blob(file_formatter)
   blob.upload_from_filename(file_formatter)
   print('File ' + file_formatter + ' upload success. Removing local copy.')
   os.remove(file_formatter)
   print('File ' + file_formatter + ' removed from local storage')
   # Loading file to BigQuery
   client = bigquery.Client()
   dataset_id = 'sample_dataset'
   dataset_ref = client.dataset(dataset_id)
   job_config = bigquery.LoadJobConfig()
   job_config.schema = [ bigquery.SchemaField(k,v) for k,v in schema_dict.items() ]
   #job_config.autodetect = True
   #job_config.create_disposition = 'CREATE_IF_NEEDED'
   #job_config.write_disposition = 'WRITE_APPEND'
   job_config.source_format = 'NEWLINE_DELIMITED_JSON'
   uri = 'gs://yathin-sample-bucket/'+file_formatter
   load_job = client.load_table_from_uri(
              uri,
              dataset_ref.table('test_json'),
              job_config=job_config)  # API request
   #assert load_job.job_type == 'load'
   #load_job.result()  # Waits for table load to complete.

This code does not return any errors, but the table is never created.

Can someone help point out what the problem is?

Comments:

Try bq ls -j --all to list current and past jobs. It looks like you are starting the job but never waiting for it to finish or checking the result.

After running load_job.result(), is there an error message in load_job.errors?

@ElliottBrossard Thanks, there was an error while parsing the JSON file. I was able to fix it and it works fine now :)

Great. If you changed the code so that you could detect the error, consider posting it as an answer for future readers.

@ElliottBrossard I will post the code once it is fully finished. A couple more things: how do I check whether the table already exists? And how do I submit the above load job with a job id and track whether it succeeded or failed? Any Python code samples would help me a lot.

Answer 1:

There may well be an error, but your script never checks the job's result. I am not sure why you commented out load_job.result(), but that call is likely needed to make sure the job actually completes.
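For example, a minimal sketch of waiting on the job and surfacing any errors, reusing the client, uri, dataset_ref and job_config already defined in your script:

load_job = client.load_table_from_uri(
           uri,
           dataset_ref.table('test_json'),
           job_config=job_config)
try:
    load_job.result()  # Blocks until the load job finishes; raises on failure
    print('Load finished, state: ' + load_job.state)
except Exception:
    print(load_job.errors)  # The job's detailed failure reasons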

If there is still no error, this script will give you a list of your most recent jobs and their results, including any error codes. Just change the max_results kwarg to look further back.

client = bigquery.Client()
for job in client.list_jobs(max_results=1, all_users=False):
    jobid = job.job_id
    job = client.get_job(jobid)
    print("------BIG QUERY JOB ERROR REASON", job.errors)

Also, regarding your question in the comments about how to check whether a table already exists...

from google.cloud.exceptions import NotFound

client = bigquery.Client()
try:
    dataset = client.dataset('DatasetName')
    table_ref = dataset.table('TableName')
    client.get_table(table_ref)  # Raises NotFound if the table does not exist
except NotFound:
    print('Table Not Found')
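As for your other question in the comments, submitting the load job under your own job id and tracking it afterwards: a rough sketch, assuming load_table_from_uri accepts a job_id keyword and reusing uri, dataset_ref and job_config from your script (the id value itself is made up; anything unique works):

import uuid

job_id = 'mixpanel_load_' + str(uuid.uuid4())  # Made-up but unique job id
load_job = client.load_table_from_uri(
           uri,
           dataset_ref.table('test_json'),
           job_id=job_id,  # Submit under our own id so it can be looked up later
           job_config=job_config)

# Later, even from another process, the same id retrieves the job and its state
job = client.get_job(job_id)
print(job.state)   # e.g. 'RUNNING' or 'DONE'
print(job.errors)  # None on success, otherwise a list of error dicts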
