Pandas: Using Apply and regex string matching on 5 million rows

Posted: 2017-12-14 09:13:51

Question: I'm trying to categorize each row of a dataframe appropriately based on its description column. To do this, I want to extract keywords from a list of common phrases. First, I split each key phrase into its words (e.g. "Food Store" becomes "Food" and "Store"). Then I check whether any row of my dataframe contains both the words "Food" and "Store". Unfortunately, the code I came up with is far too slow. How can I optimize it to handle 5 million rows of data?
Sample data:
Here are the first 30 rows of my dataframe:
bank_report_id transaction_date amount description type_codes category
0 14698 2016-04-26 -3.00 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings
1 14698 2016-04-25 -110.00 ROGERSWL 1TIME _V Uncategorized
2 14698 2016-04-25 -10.50 SUBWAY # x6664 Restaurants/Dining
3 14698 2016-04-25 -1.00 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings
4 14698 2016-04-25 -73.75 TICKETMASTER CA Entertainment
5 14698 2016-04-25 -6.20 HAPPY ONE STOP Home Improvement
6 14698 2016-04-25 -7.74 BOOSTERJUICE-19 Restaurants/Dining
7 14698 2016-04-25 -28.49 LEISURE-FIRST O Uncategorized
8 14698 2016-04-22 -3.16 MCDONALD'S #400 Restaurants/Dining
9 14698 2016-04-22 -0.50 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings
10 14698 2016-04-22 -10.50 SUBWAY # x6664 Restaurants/Dining
11 14698 2016-04-21 -19.87 TRAFALGAR ESSO Gasoline/Fuel
12 14698 2016-04-21 -1.00 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings
13 14698 2016-04-20 -3.76 MCDONALD'S #400 Restaurants/Dining
14 14698 2016-04-20 -1.00 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings
15 14698 2016-04-20 -40.00 TRAFALGAR ESSO Gasoline/Fuel
16 14698 2016-04-19 -10.07 TRAFALGAR ESSO Gasoline/Fuel
17 14698 2016-04-19 -5.21 TIM HORTONS #24 Restaurants/Dining
18 14698 2016-04-19 -3.50 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings
19 14698 2016-04-18 -1.00 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings
20 14698 2016-04-18 -5.21 TIM HORTONS #24 Restaurants/Dining
21 14698 2016-04-18 -22.57 WAL-MART #3170 General Merchandise
22 14698 2016-04-18 -16.94 URBAN PLANET #1 Clothing/Shoes
23 14698 2016-04-18 -12.95 LCBO/RAO #0545 Restaurants/Dining
24 14698 2016-04-18 -13.87 TRAFALGAR ESSO Gasoline/Fuel
25 14698 2016-04-18 -41.75 NON-TD ATM W/D ATM/Cash Withdrawals
26 14698 2016-04-18 -4.19 SUBWAY # x6338 Restaurants/Dining
27 14698 2016-04-15 -0.50 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings
28 14698 2016-04-15 -35.06 UNION BURGER Restaurants/Dining
29 14698 2016-04-15 -25.00 PIONEER STN #1 Electronics
Here is a small excerpt from the word list:
['Exxon Mobil', 'Shell', 'Food Store', 'Pizza', 'Walgreens', 'Payday Loan', 'NSF', 'Lincoln', 'Apartment', 'Homes']
My attempted solution:
import re
import pandas as pd

def get_matches(row):
    # Load the keyword phrases and split each one into lowercase words
    keywords = (pd.read_csv('Keywords.csv', encoding='ISO-8859-1')['description']
                .str.lower().str.split(" ").tolist())
    # Lowercased words of this row's description
    split_description = [d.lower() for d in row['description'].split(" ")]
    thematches = []
    for group in keywords:
        # A group matches if every word in it is found (as a regex) somewhere
        # among the description's words
        matches = [any(bool(re.search(y, x)) for x in split_description)
                   for y in group]
        if all(matches):
            thematches.append(" ".join(group))
    return thematches if thematches else "NA"

df['match'] = df.apply(get_matches, axis=1)
Desired output:
bank_report_id transaction_date amount description type_codes category match
0 14698 2016-04-26 -3.00 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings [simply save]
1 14698 2016-04-25 -110.00 ROGERSWL 1TIME _V Uncategorized [rogers]
2 14698 2016-04-25 -10.50 SUBWAY # x6664 Restaurants/Dining [subway]
3 14698 2016-04-25 -1.00 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings [simply save]
4 14698 2016-04-25 -73.75 TICKETMASTER CA Entertainment [ticket master]
5 14698 2016-04-25 -6.20 HAPPY ONE STOP Home Improvement NA
6 14698 2016-04-25 -7.74 BOOSTERJUICE-19 Restaurants/Dining [juice]
7 14698 2016-04-25 -28.49 LEISURE-FIRST O Uncategorized NA
8 14698 2016-04-22 -3.16 MCDONALD'S #400 Restaurants/Dining [mcdonald's]
9 14698 2016-04-22 -0.50 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings [simply save]
10 14698 2016-04-22 -10.50 SUBWAY # x6664 Restaurants/Dining [subway]
11 14698 2016-04-21 -19.87 TRAFALGAR ESSO Gasoline/Fuel [esso]
12 14698 2016-04-21 -1.00 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings [simply save]
13 14698 2016-04-20 -3.76 MCDONALD'S #400 Restaurants/Dining [mcdonald's]
14 14698 2016-04-20 -1.00 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings [simply save]
15 14698 2016-04-20 -40.00 TRAFALGAR ESSO Gasoline/Fuel [esso]
16 14698 2016-04-19 -10.07 TRAFALGAR ESSO Gasoline/Fuel [esso]
17 14698 2016-04-19 -5.21 TIM HORTONS #24 Restaurants/Dining [tim hortons, rt]
18 14698 2016-04-19 -3.50 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings [simply save]
19 14698 2016-04-18 -1.00 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings [simply save]
20 14698 2016-04-18 -5.21 TIM HORTONS #24 Restaurants/Dining [tim hortons, rt]
21 14698 2016-04-18 -22.57 WAL-MART #3170 General Merchandise [rt]
22 14698 2016-04-18 -16.94 URBAN PLANET #1 Clothing/Shoes [urban planet]
23 14698 2016-04-18 -12.95 LCBO/RAO #0545 Restaurants/Dining NA
24 14698 2016-04-18 -13.87 TRAFALGAR ESSO Gasoline/Fuel [esso]
25 14698 2016-04-18 -41.75 NON-TD ATM W/D ATM/Cash Withdrawals NA
26 14698 2016-04-18 -4.19 SUBWAY # x6338 Restaurants/Dining [subway]
27 14698 2016-04-15 -0.50 Simply Save TD EVERY DAY SAVINGS ACCOUNT xxxxx... Savings [simply save]
28 14698 2016-04-15 -35.06 UNION BURGER Restaurants/Dining [burger]
29 14698 2016-04-15 -25.00 PIONEER STN #1 Electronics [pioneer]
Comments:

You could build an Aho-Corasick automaton to dramatically speed up the search.

Answer 1:

You could try something like this:
df['match'] = df['description'].apply(lambda x: [l for l in match_list if l.lower() in x.lower()])
Using pandas.map (or apply, as here) with a list comprehension is generally faster than iterating row by row with an explicit loop.
If you don't like the empty [] where there are no matches, you can change them to np.nan (or whatever you prefer) with:

df['match'] = df.match.apply(lambda y: np.nan if len(y)==0 else y)
For more on boosting performance with pandas, see the following links:
topic
document
Output:
# only the interesting column
0 [simply save]
1 [rogers]
2 [subway]
3 [simply save]
4 NaN
5 NaN
6 [juice]
7 NaN
8 [mcdonald's]
9 [simply save]
10 [subway]
11 [esso]
12 [simply save]
13 [mcdonald's]
14 [simply save]
15 [esso]
16 [esso]
17 [tim hortons, rt]
18 [simply save]
19 [simply save]
20 [tim hortons, rt]
21 [rt]
22 [urban planet]
23 NaN
24 [esso]
25 NaN
26 [subway]
27 [simply save]
28 [burger]
29 [pioneer]
Hope this helps.
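A minimal self-contained sketch of this answer's approach (match_list here is a small hand-written illustration; the real list would be loaded once from Keywords.csv, outside any per-row function):

```python
import numpy as np
import pandas as pd

# Hypothetical lowercased keyword list; in the question this comes from Keywords.csv.
match_list = ['simply save', 'subway', 'esso', "mcdonald's", 'burger']

df = pd.DataFrame({'description': [
    'Simply Save TD EVERY DAY SAVINGS ACCOUNT',
    'SUBWAY # x6664',
    'TRAFALGAR ESSO',
    'HAPPY ONE STOP',
]})

# Substring matching via a list comprehension: one pass over the keywords per
# row, with no per-row regex compilation or CSV re-reading.
df['match'] = df['description'].apply(
    lambda x: [kw for kw in match_list if kw in x.lower()])

# Replace empty matches with NaN, as suggested above.
df['match'] = df['match'].apply(lambda m: np.nan if len(m) == 0 else m)
print(df['match'].tolist())
```

Note that this is plain substring matching, not the per-word regex matching of the original attempt; whether that is acceptable depends on how precise the keywords need to be.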
Answer 2:

I would do two things:

1. Since you only use the 'description' column, try exporting it to a list with df.description.tolist(). Do the string processing on that list, and then pd.concat your results back in. I believe this removes the pandas overhead. Numpy arrays are generally considered better optimized, but I'm not so sure that really holds for string operations; you can try that too, though.

2. Parallelize your code. joblib provides a very simple interface (https://pythonhosted.org/joblib/parallel.html).