Separate table row to 2 when scraping with pandas read_html
【Posted】: 2022-01-07 18:22:15
【Question】: I can't get the row format right when using pandas read_html(). I'm looking for a tweak to either the method itself or the underlying html (scraped via bs4) to get the desired output.
Current output:
(Note that this is 1 row containing both types of data. Ideally it should be split into 2 rows, as shown below.)
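For reference, read_html() returns each cell's text with both fighters' <p> values concatenated, so the last table on the page comes back as one merged row, roughly along these lines (a sketch only; exact column labels and whitespace may differ):

```
                          0                  1        2  ...                8
0  Joanne Wood Taila Santos  27 of 68 30 of 60  39% 50%  ...  0 of 0 11 of 18
```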
Desired:
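That is, one row per fighter, roughly like this (values taken from the same fight page; the full table appears in the answer's output below):

```
        Fighter  Sig. str Sig. str. %  ...  Clinch    Ground
0   Joanne Wood  27 of 68         39%  ...  1 of 1    0 of 0
1  Taila Santos  30 of 60         50%  ...  0 of 0  11 of 18
```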
Code to reproduce the problem:
import requests
import pandas as pd
from bs4 import BeautifulSoup  # alternatively

url = "http://ufcstats.com/fight-details/bb15c0a2911043bd"

df = pd.read_html(url)[-1]  # last table
df.columns = [str(i) for i in range(len(df.columns))]

# to get the html via bs4
headers = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET",
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Max-Age": "3600",
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0",
}
req = requests.get(url, headers=headers)
soup = BeautifulSoup(req.content, "html.parser")
table_html = soup.find_all("table", {"class": "b-fight-details__table"})[-1]
【Question comments】:
【Solution 1】: How to fix it (quickly) with beautifulsoup

You can create a dict with the headers from the table, then iterate over each td and append the list of values stored in its p elements:
data = {}
header = [x.text.strip() for x in table_html.select('tr th')]

for i, td in enumerate(table_html.select('tr:has(td) td')):
    data[header[i]] = [x.text.strip() for x in td.select('p')]

pd.DataFrame.from_dict(data)
Example
import requests
import pandas as pd
from bs4 import BeautifulSoup  # alternatively

url = "http://ufcstats.com/fight-details/bb15c0a2911043bd"

# to get the html via bs4
headers = {
    "Access-Control-Allow-Origin": "*",
    "Access-Control-Allow-Methods": "GET",
    "Access-Control-Allow-Headers": "Content-Type",
    "Access-Control-Max-Age": "3600",
    "User-Agent": "Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:52.0) Gecko/20100101 Firefox/52.0",
}
req = requests.get(url, headers=headers)
soup = BeautifulSoup(req.content, "html.parser")
table_html = soup.find_all("table", {"class": "b-fight-details__table"})[-1]

data = {}
header = [x.text.strip() for x in table_html.select('tr th')]

for i, td in enumerate(table_html.select('tr:has(td) td')):
    data[header[i]] = [x.text.strip() for x in td.select('p')]

pd.DataFrame.from_dict(data)
Output
Fighter | Sig. str | Sig. str. % | Head | Body | Leg | Distance | Clinch | Ground |
---|---|---|---|---|---|---|---|---|
Joanne Wood | 27 of 68 | 39% | 8 of 36 | 3 of 7 | 16 of 25 | 26 of 67 | 1 of 1 | 0 of 0 |
Taila Santos | 30 of 60 | 50% | 21 of 46 | 3 of 7 | 6 of 7 | 19 of 42 | 0 of 0 | 11 of 18 |
【Comments】:
Great solution, thank you! I had to tweak it slightly to get the right format on tables with more than 1 row. For example, your code breaks when url="ufcstats.com/fight-details/18f19b1422e8154b". The adjustment I made:

```
from collections import defaultdict

data = defaultdict(list)
header = [x.text.strip() for x in table_html.select('tr th')]

i = 0
for td in table_html.select('tr:has(td) td'):
    data[header[i]].extend([x.text.strip() for x in td.select('p')])
    i += 1
    if i == len(header):
        i = 0

pd.DataFrame.from_dict(data)
```

【Solution 2】: A similar idea, which uses enumerate to determine the number of rows, but uses :-soup-contains to locate the table and then nth-child selectors to pull out the relevant row during a list comprehension. pandas converts the resulting list of lists into a DataFrame. This assumes any additional rows follow the same pattern as the current 2.
from bs4 import BeautifulSoup as bs
import requests
import pandas as pd

r = requests.get('http://ufcstats.com/fight-details/bb15c0a2911043bd')
soup = bs(r.content, 'lxml')

table = soup.select_one(
    '.js-fight-section:has(p:-soup-contains("Significant Strikes")) + table')

df = pd.DataFrame(
    [[i.text.strip() for i in table.select(f'tr:nth-child(1) td p:nth-child({n + 1})')]
     for n, _ in enumerate(table.select('tr:nth-child(1) > td:nth-child(1) > p'))],
    columns=[i.text.strip() for i in table.select('th')])

print(df)
【Comments】: