Reading a large csv file and then splitting it causes an OOM error
Hi, I am creating a Glue job that reads a csv file and then splits it by a specific column; unfortunately, this causes an OOM (Out of Memory) error. See the code below:
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
import boto3

# get yesterday's date
Current_Date = datetime.now() - timedelta(days=1)
now = Current_Date.strftime('%Y-%m-%d')
# get the date from two days ago
Previous_Date = datetime.now() - timedelta(days=2)
prev = Previous_Date.strftime('%Y-%m-%d')

# read the csv file whose name contains yesterday's date
filepath = "s3://bucket/file" + now + ".csv.gz"
data = pd.read_csv(filepath, sep='|', header=None, compression='gzip')
# count the number of distinct date values in column 10
loop = 0
for i, x in data.groupby(data[10].str.slice(0, 10)):
    loop += 1

# if the no. of distinct values of column 10 (last_update) is greater than or equal to 7
if loop >= 7:
    # loop over the dataframe and split by distinct values of column 10 (last_update)
    for i, x in data.groupby(data[10].str.slice(0, 10)):
        # note: the original string had no {} placeholder, so every group
        # overwrote the same S3 key; a placeholder is restored here
        x.to_csv("s3://bucket/file_{}.csv.gz".format(i.lower()),
                 header=None, compression='gzip')
# if the no. of distinct values of column 10 (last_update) is less than 7,
# filter the dataframe to the current and previous dates only
else:
    d = data[(data[10].str.slice(0, 10) == prev) | (data[10].str.slice(0, 10) == now)]
    # loop over the filtered dataframe and split by distinct values of column 10 (last_update)
    for i, x in d.groupby(d[10].str.slice(0, 10)):
        x.to_csv("s3://bucket/file_{}.csv.gz".format(i.lower()),
                 header=None, compression='gzip')
Solution - I fixed this by increasing the maximum capacity of the Glue job.
Answer
Not sure how big the file is, but if you read it in chunks you should be able to avoid the error. We have successfully tested this method with a 2.5 GB file. Also, if you are using a Python Shell job, remember to update the Glue job's maximum capacity to 1.
data = pd.read_csv(filepath, chunksize=1000, iterator=True)
for i, chunk in enumerate(data):
    # loop through the chunks and process each DataFrame here
    pass
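
A minimal sketch of how the chunked read could be combined with the question's split-by-date logic. Column index 10, the '|' separator, and header=None are taken from the question; the chunk size, the /tmp staging directory, and the bucket/key names are placeholder assumptions. Since S3 objects cannot be appended to, each date's rows are first appended to a local gzip file (concatenated gzip members form a valid gzip file) and uploaded once at the end:

import gzip
import os
import boto3
import pandas as pd

filepath = "s3://bucket/file.csv.gz"  # placeholder input path, as in the question

# read in chunks so only `chunksize` rows are held in memory at a time
reader = pd.read_csv(filepath, sep='|', header=None,
                     compression='gzip', chunksize=100000)
for chunk in reader:
    # split each chunk by the date portion (first 10 chars) of column 10
    for date, group in chunk.groupby(chunk[10].str.slice(0, 10)):
        # 'at' opens the local gzip stream in text append mode
        with gzip.open("/tmp/file_{}.csv.gz".format(date), "at") as f:
            group.to_csv(f, header=False, index=False)

# upload the per-date files to S3 (bucket name is a placeholder)
s3 = boto3.client("s3")
for name in os.listdir("/tmp"):
    if name.startswith("file_") and name.endswith(".csv.gz"):
        s3.upload_file(os.path.join("/tmp", name), "bucket", name)

This keeps peak memory bounded by the chunk size rather than the full file, at the cost of one local staging pass before the upload.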