Can't extract a link connected to `see all` button from a webpage
Posted: 2020-09-23 02:21:46

I've created a script to log in to LinkedIn using requests. The script works fine.
After logging in, I used this URL https://www.linkedin.com/groups/137920/ and scraped the name Marketing Intelligence Professionals from there, which you can see in this image. The script parses that name flawlessly. What I want to do now, however, is scrape the link connected to the See all button located at the bottom of the page shown in this image.
Group link (you have to log in to access the content)
What I've created so far (it can scrape the name shown in the first image):
import json
import requests
from bs4 import BeautifulSoup
link = 'https://www.linkedin.com/login?fromSignIn=true&trk=guest_homepage-basic_nav-header-signin'
post_url = 'https://www.linkedin.com/checkpoint/lg/login-submit'
target_url = 'https://www.linkedin.com/groups/137920/'
with requests.Session() as s:
s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1; ) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36'
r = s.get(link)
soup = BeautifulSoup(r.text,"lxml")
payload = {i['name']:i.get('value','') for i in soup.select('input[name]')}
payload['session_key'] = 'your email' #put your username here
payload['session_password'] = 'your password' #put your password here
r = s.post(post_url,data=payload)
r = s.get(target_url)
soup = BeautifulSoup(r.text,"lxml")
items = soup.select_one("code:contains('viewerGroupMembership')").get_text(strip=True)
print(json.loads(items)['data']['name']['text'])
How can I scrape the link connected to the See all button from there?
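For context, the parsing step at the end of the script boils down to: locate the `<code>` element whose text contains a marker string, then `json.loads` its content and walk the resulting dict. A minimal sketch using a hypothetical miniature of the embedded payload (the real blob LinkedIn embeds is far larger):

```python
import json

# Hypothetical miniature of the JSON blob LinkedIn embeds in a <code> tag;
# the real payload contains many more keys alongside "name".
embedded = '{"data": {"name": {"text": "Marketing Intelligence Professionals"}}}'

data = json.loads(embedded)
print(data['data']['name']['text'])  # → Marketing Intelligence Professionals
```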
Comments:
Answer 1: When you click on "See all", an internal REST API is called:
GET https://www.linkedin.com/voyager/api/search/blended

The keywords query parameter contains the title of the group you originally requested (the group title from the initial page).
To get the group name you could scrape the HTML of the initial page, but there is also an API that returns the group information when you give it the group ID:
GET https://www.linkedin.com/voyager/api/groups/groups/urn:li:group:GROUP_ID
In your case, the group ID is 137920 and can be extracted directly from the URL.
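That extraction is a one-liner with the standard library's `re` module; the pattern below is one way to grab the trailing number, chosen for illustration:

```python
import re

# Extract the numeric group id from a LinkedIn group URL.
target_url = 'https://www.linkedin.com/groups/137920/'
group_id = re.search(r'/groups/(\d+)/', target_url).group(1)
print(group_id)  # → 137920
```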
An example:
import requests
from bs4 import BeautifulSoup
import re
from urllib.parse import urlencode
username = 'your username'
password = 'your password'
link = 'https://www.linkedin.com/login?fromSignIn=true&trk=guest_homepage-basic_nav-header-signin'
post_url = 'https://www.linkedin.com/checkpoint/lg/login-submit'
target_url = 'https://www.linkedin.com/groups/137920/'
group_res = re.search('.*/(.*)/$', target_url)
group_id = group_res.group(1)
with requests.Session() as s:
# login
s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 6.1; ) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36'
r = s.get(link)
soup = BeautifulSoup(r.text,"lxml")
payload = {i['name']:i.get('value','') for i in soup.select('input[name]')}
payload['session_key'] = username
payload['session_password'] = password
r = s.post(post_url, data = payload)
# API
csrf_token = s.cookies.get_dict()["JSESSIONID"].replace("\"","")
r = s.get(f"https://www.linkedin.com/voyager/api/groups/groups/urn:li:group:{group_id}",
    headers={
        "csrf-token": csrf_token
    })
group_name = r.json()["name"]["text"]
print(f"searching data for group {group_name}")
params = {
    "count": 10,
    "keywords": group_name,
    "origin": "SWITCH_SEARCH_VERTICAL",
    "q": "all",
    "start": 0
}
r = s.get(f"https://www.linkedin.com/voyager/api/search/blended?{urlencode(params)}&filters=List(resultType-%3EGROUPS)&queryContext=List(spellCorrectionEnabled-%3Etrue)",
    headers={
        "csrf-token": csrf_token,
        "Accept": "application/vnd.linkedin.normalized+json+2.1",
        "x-restli-protocol-version": "2.0.0"
    })
result = r.json()["included"]
print(result)
print("list of groupName/link")
print([
    (t["groupName"], f'https://www.linkedin.com/groups/{t["objectUrn"].split(":")[3]}')
    for t in result
])
A few notes:

- These API calls require a session cookie.
- These API calls require a specific header carrying an XSRF token whose value is the same as the JSESSIONID cookie value.
- The search call requires the special media type application/vnd.linkedin.normalized+json+2.1.
- The parentheses inside the queryContext and filters fields must not be URL-encoded, otherwise those parameters will not be taken into account.
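That last note is easy to trip over, so here is a short sketch of the URL construction: `urlencode` is applied only to the ordinary parameters, while the `filters` and `queryContext` values are concatenated verbatim so their parentheses survive un-encoded (the keyword value here is just an example):

```python
from urllib.parse import urlencode

params = {
    "count": 10,
    "keywords": "Marketing Intelligence Professionals",  # example group name
    "origin": "SWITCH_SEARCH_VERTICAL",
    "q": "all",
    "start": 0
}

# urlencode handles only the ordinary parameters; the List(...) values are
# appended as-is so their parentheses stay un-encoded.
url = ("https://www.linkedin.com/voyager/api/search/blended?"
       + urlencode(params)
       + "&filters=List(resultType-%3EGROUPS)"
       + "&queryContext=List(spellCorrectionEnabled-%3Etrue)")
print(url)
```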
Comments:
Answer 2: You could try selenium: click the See all button, then scrape the content behind the link:
from selenium import webdriver

# Start a Chrome session, open the page, and click the target element
driver = webdriver.Chrome()
driver.get('https://www.linkedin.com/xxxx')
driver.find_element_by_name('s_image').click()
Selenium documentation: https://selenium-python.readthedocs.io/
Comments:
I didn't even tag selenium, yet this suggestion showed up anyway. I really don't see how it specifically addresses my question. Thanks.