Pandas taking too long and consuming too much memory when working with an Excel file
【Posted】2018-02-15 09:02:52
【Question】I am processing an Excel worksheet with fewer than 50k rows. What I want to do is: using a particular column, get all of its unique values, then for each unique value collect every row containing it, and put them in this format:
[
    {
        "unique_field_value": [Array containing row data that match the unique value, as dictionaries]
    },
]
The problem is that when I test with fewer rows, say 1000, everything goes smoothly. As the row count grows, memory usage climbs until it can grow no more and my computer freezes. So, is there something wrong with pandas? Here are my platform details:
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"
NAME="Ubuntu"
VERSION="16.04.3 LTS (Xenial Xerus)"
ID_LIKE=debian
VERSION_ID="16.04"
Here is the code I run in a Jupyter notebook:
import pandas as pd
import simplejson
import datetime

def datetime_handler(x):
    if isinstance(x, datetime.datetime):
        return x.isoformat()
    raise TypeError("Type not Known")

path = "/home/misachi/Downloads/new members/my_file.xls"

df = pd.read_excel(path, index_col=None, skiprows=[0])
df = df.dropna(thresh=5)
df2 = df.drop_duplicates(subset=['corporate'])

schemes = df2['corporate'].values
result_list = []
result_dict = {}
for count, name in enumerate(schemes):
    inner_dict = {}
    col_val = schemes[count]
    foo = df['corporate'] == col_val
    data = df[foo].to_json(orient='records', date_format='iso')
    result_dict[name] = simplejson.loads(data)
    result_list.append(result_dict)
    # print(result_list)
    # if count == 3:
    #     break

dumped = simplejson.dumps(result_list, ignore_nan=True, default=datetime_handler)

with open('/home/misachi/Downloads/new members/members/folder/insurance.json', 'w') as json_f:
    json_f.write(dumped)
EDIT
Here is a sample of the expected output:
[
    {
        "TABBY MEMORIAL CATHEDRAL": [
            {
                "corp_id": 8494,
                "smart": null,
                "copay": null,
                "corporate": "TABBY MEMORIAL CATHEDRAL",
                "category": "CAT A",
                "member_names": "Brian Maombi",
                "member_no": "84984",
                "start_date": "2017-03-01T00:00:00.000Z",
                "end_date": "2018-02-28T00:00:00.000Z",
                "outpatient": "OUTPATIENT"
            },
            {
                "corp_id": 8494,
                "smart": null,
                "copay": null,
                "corporate": "TABBY MEMORIAL CATHEDRAL",
                "category": "CAT A",
                "member_names": "Omula Peter",
                "member_no": "4784984",
                "start_date": "2017-03-01T00:00:00.000Z",
                "end_date": "2018-02-28T00:00:00.000Z",
                "outpatient": "OUTPATIENT"
            }
        ],
        "CHECKIFY KENYA LTD": [
            {
                "corp_id": 7489,
                "smart": "SMART",
                "copay": null,
                "corporate": "CHECKIFY KENYA LTD",
                "category": "CAT A",
                "member_names": "BENARD KONYI",
                "member_no": "ABB/8439",
                "start_date": "2017-08-01T00:00:00.000Z",
                "end_date": "2018-07-31T00:00:00.000Z",
                "outpatient": "OUTPATIENT"
            },
            {
                "corp_id": 7489,
                "smart": "SMART",
                "copay": null,
                "corporate": "CHECKIFY KENYA LTD",
                "category": "CAT A",
                "member_names": "KEVIN WACHAI",
                "member_no": "ABB/67484",
                "start_date": "2017-08-01T00:00:00.000Z",
                "end_date": "2018-07-31T00:00:00.000Z",
                "outpatient": "OUTPATIENT"
            }
        ]
    }
]
The complete, cleaned-up code is:
import os
import pandas as pd
import simplejson
import datetime

def datetime_handler(x):
    if isinstance(x, datetime.datetime):
        return x.isoformat()
    raise TypeError("Unknown type")

def work_on_data(filename):
    if not os.path.isfile(filename):
        raise IOError
    df = pd.read_excel(filename, index_col=None, skiprows=[0])
    df = df.dropna(thresh=5)
    result_list = [{n: g.to_dict('records') for n, g in df.groupby('corporate')}]
    dumped = simplejson.dumps(result_list, ignore_nan=True, default=datetime_handler)
    return dumped

dumped = work_on_data('/home/misachi/Downloads/new members/my_file.xls')
with open('/home/misachi/Downloads/new members/members/folder/insurance.json', 'w') as json_f:
    json_f.write(dumped)
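A side note on the two less common keyword arguments used above: ignore_nan=True is simplejson-specific and writes NaN cells as JSON null instead of the invalid token NaN, while default= routes any object the encoder cannot serialize (here, datetimes) through datetime_handler. A minimal illustration, with made-up row values:

import datetime
import simplejson

def datetime_handler(x):
    if isinstance(x, datetime.datetime):
        return x.isoformat()
    raise TypeError("Unknown type")

row = {'start_date': datetime.datetime(2017, 3, 1), 'copay': float('nan')}
print(simplejson.dumps(row, ignore_nan=True, default=datetime_handler))
# {"start_date": "2017-03-01T00:00:00", "copay": null}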
【Answer 1】: Get the dictionary with
result_dict = [{n: g.to_dict('records') for n, g in df.groupby('corporate')}]
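For intuition, here is a minimal self-contained sketch of what that comprehension produces; the frame is toy data modeled on the question's expected output:

import pandas as pd

# Stand-in for the Excel sheet: column names are taken from the question,
# the rows themselves are illustrative only
df = pd.DataFrame({
    'corporate': ['TABBY MEMORIAL CATHEDRAL', 'TABBY MEMORIAL CATHEDRAL',
                  'CHECKIFY KENYA LTD'],
    'member_names': ['Brian Maombi', 'Omula Peter', 'BENARD KONYI'],
})

# groupby splits the frame once per unique 'corporate' value in a single pass,
# and to_dict('records') turns each group into a list of row dictionaries
result_dict = [{n: g.to_dict('records') for n, g in df.groupby('corporate')}]

print(result_dict[0]['CHECKIFY KENYA LTD'])
# [{'corporate': 'CHECKIFY KENYA LTD', 'member_names': 'BENARD KONYI'}]

Unlike the original loop, this splits the dataframe only once and never round-trips through JSON, which is where both the time and the memory were going.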
【Discussion】:
This works, and it is faster, more efficient and cleaner. But it does not return the data in the specified format, i.e. [{key: val}] where key is the unique field name and val is a list of dictionaries holding the data of every row that shares that unique field value.
I suggest you put together a concrete example and show the desired output, so I don't have to guess what you are trying to do.
Based on your edit, you have a list of length one whose first and only element is the very dictionary I gave you. The only modification I made was to wrap the previous solution in square brackets.
Can you tell why my initial code was so slow?
You loop over every unique value and split the dataframe each time. On top of that, on every iteration you dump to JSON only to read it straight back in.
【Answer 2】: Specify the chunksize=10000 parameter to read_excel() and loop through the file until you reach the end of the data. This will help you manage memory when working with large files. If you need to handle multiple sheets, follow this example:
for chunk in pd.read_excel(path, index_col=None, skiprows=[0], chunksize=10000):
    df = chunk.dropna(thresh=5)
    df2 = df.drop_duplicates(subset=['corporate'])
    # rest of your code
【Discussion】:
Specifying a chunk size raises NotImplementedError. The documentation does not cover it either.
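Since read_excel() does not implement chunksize, one workaround is to page through the sheet manually with skiprows and nrows. This is only a sketch: it assumes a pandas version where read_excel() accepts nrows, and the helper name and chunk size are arbitrary:

import pandas as pd

def excel_chunks(path, chunksize=10000):
    # Read only the header; the sheet's first row is skipped as in the
    # question, so the column names come from its second row
    cols = pd.read_excel(path, skiprows=[0], nrows=0).columns
    offset = 0
    while True:
        # Skip the junk row, the header row, and everything already consumed
        chunk = pd.read_excel(path, header=None, names=cols,
                              skiprows=2 + offset, nrows=chunksize)
        if chunk.empty:
            return
        yield chunk
        offset += chunksize

for chunk in excel_chunks('/home/misachi/Downloads/new members/my_file.xls'):
    df = chunk.dropna(thresh=5)
    # rest of your code

Bear in mind that Excel files are not streamable the way CSVs are: every call re-opens and re-parses the workbook, so this bounds memory at the cost of time. If the data fits in memory at all, the groupby answer above is the better fix; failing that, exporting the sheet to CSV once and using pd.read_csv(..., chunksize=10000), which does stream, is usually cleaner.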