scrapy--selenium
Posted by eilinge
I have been learning Scrapy for crawling, but ran into the problem of dynamically loaded pages. I started with the JavaScript renderer Splash and its Docker service, and hit all sorts of odd problems:
1. The Docker proxy settings would not take effect, so the Splash image could not be pulled.
2. Enabling the Splash service in settings.py broke the SSL connection.
Then I came across Selenium. At first I doubted it could do much, but once I tried it I found it very pleasant to use, so here is a short introduction.
Based on the Selenium usage guide at https://blog.csdn.net/qq_29186489/article/details/78661008, plus other things I picked up while learning.
Let's get started:
Selenium is a tool for testing web applications. Selenium tests run directly in the browser, just as a real user would operate it.
For crawling, Selenium is mainly used to solve the JavaScript rendering problem.
Detailed usage is as follows:
1: Declaring a browser object
# -*- coding: utf-8 -*-
from selenium import webdriver
# declare a Chrome, Firefox, Safari, Edge, or PhantomJS browser
# (pick the one whose driver is installed; each line launches that browser)
browser=webdriver.Chrome()
browser=webdriver.Firefox()
browser=webdriver.Safari()
browser=webdriver.Edge()
browser=webdriver.PhantomJS()
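For crawling it is often useful to run Chrome without opening a window. This headless sketch is my own addition rather than part of the referenced guide; chrome_options is the Selenium 3.x argument name (newer versions prefer options=):
# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

# run Chrome with no visible window (handy on servers)
opts=Options()
opts.add_argument("--headless")
opts.add_argument("--disable-gpu")
browser=webdriver.Chrome(chrome_options=opts)
browser.get("https://www.taobao.com")
print(browser.title)
browser.quit()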
2: Visiting a page
#_*_coding: utf-8_*_
import time
from selenium import webdriver
browser=webdriver.Chrome()
browser.get("http://www.taobao.com")
time.sleep(3)
print(browser.page_source)
'''
Traceback (most recent call last):
  File "selenium4.py", line 6, in <module>
    print(browser.page_source)
  File "C:\Users\wuchan4x\AppData\Local\Continuum\anaconda2\lib\encodings\cp1252.py", line 12, in encode
    return codecs.charmap_encode(input,errors,encoding_table)
UnicodeEncodeError: 'charmap' codec can't encode characters in position 13799-13802: character maps to <undefined>
'''
print(browser.page_source.encode('utf8'))
browser.close()
3: Finding a single element
#_*_coding: utf-8_*_
from selenium import webdriver
from selenium.webdriver.common.by import By
browser=webdriver.Chrome()
browser.get("http://www.taobao.com")
input_first=browser.find_element_by_id("q")
input_second=browser.find_element_by_css_selector("#q")
input_third=browser.find_element(By.ID,"q")
print(input_first,input_second,input_third)
'''
(<selenium.webdriver.remote.webelement.WebElement (session="2846b0462091d95ca6da355fefd70808", element="0.12060594823778437-1")>,
<selenium.webdriver.remote.webelement.WebElement (session="2846b0462091d95ca6da355fefd70808", element="0.12060594823778437-1")>,
<selenium.webdriver.remote.webelement.WebElement (session="2846b0462091d95ca6da355fefd70808", element="0.12060594823778437-1")>)
'''
browser.close()
4: Finding multiple elements
#_*_coding: utf-8_*_
from selenium import webdriver
from selenium.webdriver.common.by import By
browser=webdriver.Chrome()
browser.get("http://www.taobao.com")
lis=browser.find_elements_by_css_selector("li")
lis_c=browser.find_elements(By.CSS_SELECTOR,"li")
#[<selenium.webdriver.remote.webelement.WebElement (session="f326ff15fb184846950679a37c7bc437", element="0.8927328599507083-2")>, ...]
print(lis,lis_c)
browser.close()
5: Interacting with elements
Call interaction methods on the elements you have located.
#_*_coding: utf-8_*_
from selenium import webdriver
import time
browser=webdriver.Chrome()
browser.get("https://www.taobao.com")
#input=browser.find_element_by_id("q")
#input=browser.find_element_by_xpath('//*[@id="J_TSearchForm"]/div[2]/div[3]/div/input')
input=browser.find_element_by_class_name("search-combobox-input")
input.send_keys("iPhone")
browser.save_screenshot('iphone.png')
time.sleep(10)
input.clear()
input.send_keys("iPad")
button=browser.find_element_by_class_name("btn-search")
button.click()
time.sleep(10)
browser.close()
6: Action chains
Attach actions to an action chain (queue them up, then perform them together).
#_*_coding: utf-8_*_
from selenium import webdriver
from selenium.webdriver import ActionChains
import time
from selenium.webdriver.common.alert import Alert
browser=webdriver.Chrome()
url="http://www.runoob.com/try/try.php?filename=jqueryui-api-droppable"
browser.get(url)
# switch into the frame that contains the target elements
browser.switch_to.frame("iframeResult")
# the element to drag
source=browser.find_element_by_id("draggable")
# the element to drop it onto
target=browser.find_element_by_id("droppable")
# build the action chain
actions=ActionChains(browser)
actions.drag_and_drop(source,target)
# perform the queued actions
actions.perform()
'''
1. First switch to the alert box with switch_to_alert()
2. text returns the text of the alert
3. accept() clicks the OK button
4. dismiss() cancels the alert, like clicking the X in the corner
'''
t=browser.switch_to_alert()
print(t.text)
t.accept()
time.sleep(10)
browser.close()
7: Executing JavaScript
The example below scrolls the page to the bottom and then pops up an alert box.
#_*_coding: utf-8_*_
from selenium import webdriver
browser=webdriver.Chrome()
browser.get("https://www.zhihu.com/explore")
browser.execute_script("window.scrollTo(0,document.body.scrollHeight)")
browser.execute_script("alert(‘To Button‘)")
browser.close()
8: Getting element information
Getting attributes
# -*- coding: utf-8 -*-
from selenium import webdriver
browser=webdriver.Chrome()
url='https://www.zhihu.com/explore'
browser.get(url)
browser.implicitly_wait(10)
logo=browser.find_element_by_id('zh-top-link-logo')
print(logo)
#<selenium.webdriver.remote.webelement.WebElement (session="4d57a638f4970f803c603c8fd677f77a", element="0.949410104143495-1")>
print(logo.id)
#0.949410104143495-1
print(logo.size)
#{'width': 61, 'height': 45}
print(logo.location)
#{'y': 0.0, 'x': 10.0}
print(logo.tag_name)
#a
print(logo.text)
#u'\u77e5\u4e4e'  (the text "知乎")
browser.close()
9: Waits
Implicit waits
When an implicit wait is set and the WebDriver cannot find an element in the DOM, it keeps waiting and only raises a not-found exception after the configured time has elapsed. In other words, when an element is not immediately present, the implicit wait keeps polling the DOM for up to the given time before failing. The default is 0.
# -*- coding: utf-8 -*-
from selenium import webdriver
browser=webdriver.Chrome()
url="https://www.zhihu.com/explore"
browser.get(url)
browser.implicitly_wait(10)
logo=browser.find_element_by_id("zh-top-link-logo")
print(logo)
browser.close()
Explicit waits
# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
browser=webdriver.Chrome()
url="https://www.taobao.com"
browser.get(url)
wait=WebDriverWait(browser,10)
input=wait.until(EC.presence_of_element_located((By.ID,"q")))
button=wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR,".btn-search")))
print(input,button)
browser.close()
10: Browser back and forward
# -*- coding: utf-8 -*-
from selenium import webdriver
import time
browser=webdriver.Chrome()
browser.get("https://www.taobao.com")
browser.get("https://www.baidu.com")
browser.get("https://www.python.org")
browser.back()
time.sleep(1)
browser.forward()
browser.close()
11: Handling cookies
# -*- coding: utf-8 -*-
from selenium import webdriver
import time
browser=webdriver.Chrome()
browser.get("https://www.zhihu.com/explore")
print(browser.get_cookies())
browser.add_cookie({"name":"name","domain":"www.zhihu.com","value":"germey"})
print(browser.get_cookies())
browser.delete_all_cookies()
print(browser.get_cookies())
browser.close()
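A pattern that is often useful for crawlers (my own addition, not from the referenced guide): dump the cookies to a file after logging in, then restore them in a later session. The file name cookies.json is only an example:
# -*- coding: utf-8 -*-
import json
from selenium import webdriver

browser=webdriver.Chrome()
browser.get("https://www.zhihu.com/explore")
# save the current cookies to disk
with open("cookies.json","w") as f:
    json.dump(browser.get_cookies(),f)
# later: open the same domain first, then put the cookies back
with open("cookies.json") as f:
    for cookie in json.load(f):
        browser.add_cookie(cookie)
browser.refresh()
browser.close()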
12: Tab management
# -*- coding: utf-8 -*-
from selenium import webdriver
import time
browser=webdriver.Chrome()
browser.get("https://www.zhihu.com/explore")
browser.execute_script("window.open()")
print(browser.window_handles)
browser.switch_to_window(browser.window_handles[1])
browser.get("https://www.taobao.com")
time.sleep(1)
browser.switch_to_window(browser.window_handles[0])
browser.get("https://python.org")
browser.close()
13: Exception handling
# -*- coding: utf-8 -*-
from selenium import webdriver
from selenium.common.exceptions import TimeoutException,NoSuchElementException
browser=webdriver.Chrome()
try:
    browser.get("https://www.zhihu.com/explore")
except TimeoutException:
    print("Time out")
try:
    browser.find_element_by_id("hello")
except NoSuchElementException:
    print("No Element")
finally:
    browser.close()
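Coming back to the title: the usual way to combine Scrapy with Selenium is a downloader middleware that lets the browser render the page and hands Scrapy an HtmlResponse. The sketch below is my own summary of that pattern, not code from the referenced guide; the class and module names are placeholders, and the middleware still has to be enabled under DOWNLOADER_MIDDLEWARES in settings.py (e.g. {'myproject.middlewares.SeleniumMiddleware': 543}).
# -*- coding: utf-8 -*-
# middlewares.py (sketch): render requests with Selenium instead of Scrapy's downloader
from scrapy.http import HtmlResponse
from selenium import webdriver

class SeleniumMiddleware(object):
    def __init__(self):
        self.browser=webdriver.Chrome()
        self.browser.implicitly_wait(10)

    def process_request(self,request,spider):
        self.browser.get(request.url)
        # returning a Response here short-circuits Scrapy's own download
        return HtmlResponse(url=request.url,body=self.browser.page_source,
                            encoding="utf-8",request=request)

    def __del__(self):
        # quick-and-dirty cleanup; hooking the spider_closed signal is cleaner
        self.browser.quit()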
Notes (problems encountered and how they were solved):
File"C:Userswuchan4xAppDataLocalContinuumanaconda2libsite-packagesselenium-3.14.0-py2.7.eggseleniumwebdrivercommonservice.py", line 83, in start
os.path.basename(self.path), self.start_error_message)
selenium.common.exceptions.WebDriverException: Message: ‘chromedriver‘ executable needs to be in PATH. Please see https://sites.google.com/a/chromium.org/chromedriver/home
解决办法:将chomedriver.exe放入运行python文件目录下
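Alternatively (my own addition), the driver location can be passed explicitly instead of relying on PATH; executable_path is the Selenium 3.x argument name, and the path below is only an example:
from selenium import webdriver
# point Selenium straight at the driver binary (example path)
browser=webdriver.Chrome(executable_path=r"C:\tools\chromedriver.exe")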
Traceback (most recent call last):
  File "login_.py", line 22, in <module>
    print(data)
  File "C:\Users\wuchan4x\AppData\Local\Continuum\anaconda2\lib\encodings\cp1252.py", line 12, in encode
    return codecs.charmap_encode(input,errors,encoding_table)
UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-1: character maps to <undefined>
Solution: print(data.encode('utf-8'))
Locating an element precisely:
After moving to the next page, give it time.sleep(3) so the page has rendered, then locate the element with any of the following:
input=browser.find_element_by_id("q")
input=browser.find_element_by_xpath('//*[@id="J_TSearchForm"]/div[2]/div[3]/div/input')
input=browser.find_element_by_class_name("search-combobox-input")