How to find all elements on a web page by scrolling with Selenium WebDriver and Python
I can't seem to get all the elements on the web page, no matter what I try with Selenium. I believe I'm missing something. Here is my code. The URL has at least 30 items, yet I only ever scrape 6 of them. What am I missing?
import requests
import webbrowser
import time
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
url = 'https://www.adidas.com/us/men-shoes-new_arrivals'
res = requests.get(url, headers = headers)
page_soup = bs(res.text, "html.parser")
containers = page_soup.findAll("div", {"class": "gl-product-card-container show-variation-carousel"})
print(len(containers))
#for each container find shoe model
shoe_colors = []
for container in containers:
    if container.find("div", {'class': 'gl-product-card__reviews-number'}) is not None:
        shoe_model = container.div.div.img["title"]
        review = container.find('div', {'class': 'gl-product-card__reviews-number'})
        review = int(review.text)
driver = webdriver.Chrome()
driver.get(url)
time.sleep(5)
shoe_prices = driver.find_elements_by_css_selector('.gl-price')
for price in shoe_prices:
    print(price.text)
print(len(shoe_prices))
Answer
Trying out your code, my results look a bit different:
- You found 30 items with requests and 6 items with Selenium
- Whereas I found 40 items with requests and 4 items with Selenium
The items on this site are generated dynamically through lazy loading, so you have to scroll down and wait for the new elements to be rendered into the HTML DOM. You can use the following solution:
- Code block:
import requests
import webbrowser
from bs4 import BeautifulSoup as bs
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException, TimeoutException

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/61.0.3163.100 Safari/537.36'}
url = 'https://www.adidas.com/us/men-shoes-new_arrivals'
res = requests.get(url, headers=headers)
page_soup = bs(res.text, "html.parser")
containers = page_soup.findAll("div", {"class": "gl-product-card-container show-variation-carousel"})
print(len(containers))

shoe_colors = []
for container in containers:
    if container.find("div", {'class': 'gl-product-card__reviews-number'}) is not None:
        shoe_model = container.div.div.img["title"]
        review = container.find('div', {'class': 'gl-product-card__reviews-number'})
        review = int(review.text)

options = Options()
options.add_argument('start-maximized')
options.add_argument('disable-infobars')
options.add_argument('--disable-extensions')
driver = webdriver.Chrome(chrome_options=options, executable_path=r'C:\WebDrivers\chromedriver.exe')
driver.get(url)

# wait until the first batch of price elements is visible
myLength = len(WebDriverWait(driver, 20).until(EC.visibility_of_all_elements_located((By.CSS_SELECTOR, "span.gl-price"))))

# keep scrolling until no new price elements appear within the timeout
while True:
    driver.execute_script("window.scrollBy(0,400)", "")
    try:
        WebDriverWait(driver, 20).until(lambda driver: len(driver.find_elements_by_css_selector("span.gl-price")) > myLength)
        titles = driver.find_elements_by_css_selector("span.gl-price")
        myLength = len(titles)
    except TimeoutException:
        break

print(myLength)
for title in titles:
    print(title.text)
driver.quit()
- Console output:
47 $100 $100 $100 $100 $100 $100 $180 $180 $180 $180 $130 $180 $180 $130 $180 $130 $200 $180 $180 $130 $60 $100 $30 $65 $120 $100 $85 $180 $150 $130 $100 $100 $80 $100 $120 $180 $200 $130 $130 $100 $120 $120 $100 $180 $90 $140 $100
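A side note, hedged: the answer above uses the Selenium 3 API. In Selenium 4 the find_elements_by_* helpers, chrome_options= and executable_path= were removed, so a minimal sketch of the same scroll-and-wait loop under that assumption might look like the following (the URL and the span.gl-price selector are simply carried over from the answer and may no longer match the live adidas.com markup):

# Sketch only: assumes Selenium 4+, where Selenium Manager resolves the driver binary.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

options = Options()
options.add_argument('--start-maximized')
driver = webdriver.Chrome(options=options)  # no executable_path needed in Selenium 4
driver.get('https://www.adidas.com/us/men-shoes-new_arrivals')

# wait for the first visible batch of price elements
count = len(WebDriverWait(driver, 20).until(
    EC.visibility_of_all_elements_located((By.CSS_SELECTOR, 'span.gl-price'))))

while True:
    driver.execute_script('window.scrollBy(0, 400);')
    try:
        # wait until lazy loading has added more price elements, or give up
        WebDriverWait(driver, 10).until(
            lambda d: len(d.find_elements(By.CSS_SELECTOR, 'span.gl-price')) > count)
        count = len(driver.find_elements(By.CSS_SELECTOR, 'span.gl-price'))
    except TimeoutException:
        break  # no new elements within the timeout, assume everything is loaded

for price in driver.find_elements(By.CSS_SELECTOR, 'span.gl-price'):
    print(price.text)
print(count)
driver.quit()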
Another answer
You have to scroll down the page slowly. The price data is only requested via AJAX as the products come into view.
import time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--start-maximized')
driver = webdriver.Chrome(options=options)
url = 'https://www.adidas.com/us/men-shoes-new_arrivals'
driver.get(url)

# one scroll step per product row (4-column grid, so rows = products / 4)
scroll_times = len(driver.find_elements_by_class_name('col-s-6')) / 4
scrolled = 0
scroll_size = 400
while scrolled < scroll_times:
    driver.execute_script('window.scrollTo(0, arguments[0]);', scroll_size)
    scrolled += 1
    scroll_size += 400
    time.sleep(1)

shoe_prices = driver.find_elements_by_class_name('gl-price')
for price in shoe_prices:
    print(price.text)
print(len(shoe_prices))
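Both answers scroll in fixed 400-pixel steps, which works here because the number of rows can be counted up front. A more generic pattern for lazy-loaded pages, sketched below under the assumption that loading has finished once the page height stops growing, is to scroll to the bottom repeatedly and compare document.body.scrollHeight between iterations; the URL is reused from the question and the .gl-price selector is only illustrative:

# Sketch: generic "scroll until the page stops growing" loop for lazy-loaded pages.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.adidas.com/us/men-shoes-new_arrivals')

last_height = driver.execute_script('return document.body.scrollHeight')
while True:
    driver.execute_script('window.scrollTo(0, document.body.scrollHeight);')
    time.sleep(2)  # give the AJAX requests time to finish and render
    new_height = driver.execute_script('return document.body.scrollHeight')
    if new_height == last_height:  # nothing new was appended, stop scrolling
        break
    last_height = new_height

prices = driver.find_elements(By.CSS_SELECTOR, '.gl-price')
print(len(prices))
for price in prices:
    print(price.text)
driver.quit()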