xlsxwriter consuming too much memory and process gets killed
Posted: 2017-10-02 14:50:36

I am using the xlsxwriter Python package to export data from a PostgreSQL database to Excel in a Django project. I have implemented a Django command to do this, but the problem is that there are more than 4 million records, and writing the file consumes all of my RAM until the process gets killed.
Log:
[export_user_data_to_excel]> Generating excel file with:
3913616 Instagram publications
1250156 Instagram hashtags
513124 Twitter publications
127912 Twitter hashtags
Killed
I tried an option called "constant_memory", but it doesn't seem to make any difference. Below is the method that writes the excel file:
def write_to_excel_perf(filename, instagram_publications, instagram_tags, twitter_publications, twitter_tags, instance):
    """
    Export the current queryset to an excel file in xlsx format.
    Optimized for low memory consumption and better performance.
    http://xlsxwriter.readthedocs.io/working_with_memory.html#memory-perf
    """
    logger.info("[write_to_excel_perf]> Opening Workbook..")
    book = xlsxwriter.Workbook(filename, {'constant_memory': True})
    if 'instagram' in instance:
        logger.info("[write_to_excel_perf]> Writing Instagram publications..")
        sheet = book.add_worksheet('Instagram Media')
        # Adding media page
        titles = ["Type", "City", "Date", "Instagram Id", "Instagram URL", "caption", "likes",
                  "author", "location id", "location name", "lat", "lng"]
        i = 0
        for title in titles:
            sheet.write(0, i, title)
            i += 1
        row_index = 1
        # We improve the performance making sure that we query by related data using select_related
        # and prefetch_related when needed
        instagram_publications = instagram_publications.select_related('location__spot__city', 'author', 'location')
        for el in instagram_publications:
            # ["Type", "Date", "Instagram Id", "Instagram URL", "caption", "likes", "author", "author_profile",
            #  "location id", "location name", "lat", "lng"]
            mediaType = 'Photo' if el.mediaType == '1' else 'Video'
            city = el.location.spot.city.name if el.location is not None and el.location.spot.city is not None else "Undefined"
            publication_date = el.publication_date.strftime("%d/%m/%Y %H:%M")
            username = el.author.username if el.author is not None else "Undefined"
            location_id = el.location.instagramID if el.location is not None else "Undefined"
            location_name = el.location.name if el.location is not None else "Undefined"
            location_lat = el.location.position.y if el.location is not None else "Undefined"
            location_lng = el.location.position.x if el.location is not None else "Undefined"
            row = [mediaType, city, publication_date, el.instagramID, el.instagram_url, el.caption, el.likes,
                   username, location_id, location_name, location_lat, location_lng]
            column_index = 0
            for value in row:
                sheet.write(row_index, column_index, value)
                column_index += 1
            row_index += 1
        # Adding tag page
        sheet = book.add_worksheet('Instagram Tags')
        titles = ["Hashtag", "Quantity"]
        i = 0
        for title in titles:
            sheet.write(0, i, title)
            i += 1
        row_index = 1
        if instagram_tags is not None:
            logger.info("[write_to_excel_perf]> Writing Instagram hashtags..")
            for el in instagram_tags:
                hashtag_id = el.get('hashtag__id')
                label = Hashtag.objects.get(id=hashtag_id).label
                sheet.write(row_index, 0, label)
                sheet.write(row_index, 1, el.get('count'))
                row_index += 1
        else:
            sheet.write(1, 0, "No hashtags in query")
    if 'twitter' in instance:
        # TwitterPublication
        logger.info("[write_to_excel_perf]> Writing Twitter publications..")
        sheet = book.add_worksheet('Twitter Media')
        titles = ["City", "Date", "Twitter Id", "Twitter URL", "caption", "likes",
                  "author", "lat", "lng"]
        i = 0
        for title in titles:
            sheet.write(0, i, title)
            i += 1
        row_index = 1
        twitter_publications = twitter_publications.select_related('location__spot__city', 'author', 'location')
        for el in twitter_publications:
            city = el.location.spot.city.name if el.location is not None and el.location.spot.city is not None else "Undefined"
            publication_date = el.publication_date.strftime("%d/%m/%Y %H:%M")
            username = el.author.username if el.author is not None else "Undefined"
            location_lat = el.location.position.y if el.location is not None else "Undefined"
            location_lng = el.location.position.x if el.location is not None else "Undefined"
            row = [city, publication_date, el.twitterID, el.twitter_url, el.caption, el.likes,
                   username, location_lat, location_lng]
            column_index = 0
            for value in row:
                sheet.write(row_index, column_index, value)
                column_index += 1
            row_index += 1
        # Adding tag page
        sheet = book.add_worksheet('Twitter Tags')
        titles = ["Hashtag", "Quantity"]
        i = 0
        for title in titles:
            sheet.write(0, i, title)
            i += 1
        row_index = 1
        if twitter_tags is not None:
            logger.info("[write_to_excel_perf]> Writing Twitter hashtags..")
            for el in twitter_tags:
                hashtag_id = el.get('hashtag__id')
                label = TwitterHashtag.objects.get(id=hashtag_id).label
                sheet.write(row_index, 0, label)
                sheet.write(row_index, 1, el.get('count'))
                row_index += 1
        else:
            sheet.write(1, 0, "No hashtags in query")
    book.close()
    logger.info("[write_to_excel_perf]> Export file generated successfully.")
    return book
Answer 1:

"I tried an option called constant_memory, but it doesn't seem to make any difference."

It should work. As shown in the XlsxWriter documentation, the constant_memory option keeps memory usage constant and small.
So if it makes no difference for your application, the problem is probably not XlsxWriter; something else is consuming the memory.
Could you verify that by commenting out all the calls to worksheet.write() and running the test again?
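A minimal sketch of such a check (the queryset and field names below simply mirror the question's code, and reading memory through the standard resource module is only one way to watch usage):

import resource

import xlsxwriter


def dry_run_export(filename, instagram_publications):
    """Walk the same data but skip every worksheet.write() call, to see
    whether memory still climbs without XlsxWriter doing any work."""
    book = xlsxwriter.Workbook(filename, {'constant_memory': True})
    sheet = book.add_worksheet('Instagram Media')
    qs = instagram_publications.select_related('location__spot__city', 'author', 'location')
    for el in qs:
        # Touch the same fields as the real export, but do not write them.
        _ = (el.instagramID, el.instagram_url, el.caption, el.likes)
        # sheet.write(...)  # intentionally left commented out, as suggested
    book.close()
    # ru_maxrss is reported in kilobytes on Linux (bytes on macOS).
    print("peak RSS:", resource.getrusage(resource.RUSAGE_SELF).ru_maxrss)

If this dry run still gets killed, the memory is going into the queryset iteration rather than into the workbook.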
Comments:

Thanks! I think you are right. I am still debugging it, but I found that the main problem is the use of select_related(), which pulls the data of a huge number of related objects into memory at once.

I wonder if there is a way to force garbage collection after writing each row, because memory still seems to grow, slowly but steadily. I don't know why, but I guess it may be the related objects' data.

@Mariano In constant_memory mode XlsxWriter keeps only one row of data in memory and flushes the data to disk on each new row. Are you sure XlsxWriter is what is causing the memory growth?

The problem is not XlsxWriter, you are right. The problem is that I am iterating over a huge Django queryset with a for loop, and when I access each element some additional queries are executed to fetch data from other related objects. For some reason the memory is not released until the loop finishes. This seems to be a common Python/Django problem when iterating over large querysets, and I have not found the solution yet.
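One pattern that has helped in similar situations (a sketch, not the asker's confirmed fix): Django's QuerySet.iterator() streams results from the database instead of caching every model instance on the queryset, so memory stays roughly flat across the loop. select_related() still works together with iterator(); the chunk_size argument only exists in newer Django releases, and prefetch_related() is ignored by iterator() in older ones.

# Sketch: stream the publications instead of letting the queryset cache them all.
qs = instagram_publications.select_related('location__spot__city', 'author', 'location')
row_index = 1
for el in qs.iterator(chunk_size=2000):  # chunk_size requires a recent Django version
    values = build_row(el)  # build_row() is a hypothetical helper holding the per-row logic above
    for column_index, value in enumerate(values):
        sheet.write(row_index, column_index, value)
    row_index += 1

The per-row Hashtag.objects.get() calls in the tag sheets are a separate cost: they issue one extra query per row, and could likely be avoided by including the label in the values() the aggregated tag queryset already returns.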