Web Scraping with Selenium


BeautifulSoup: fast, supports chained searching; mainly for static pages.

Selenium: mainly for dynamic (JavaScript-rendered) pages; slower at locating elements.
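To make the contrast concrete, here is a minimal sketch of static parsing using only the standard library's html.parser, standing in for BeautifulSoup (the HTML string and class name are made up for illustration). A static parser only sees the HTML as delivered; content injected later by JavaScript never appears in it, which is where Selenium comes in.

```python
from html.parser import HTMLParser

# Hypothetical static page: the link is present in the raw HTML,
# so a static parser can find it without running a browser.
HTML = '<html><body><a class="board-item" href="/movie/1">Movie One</a></body></html>'

class LinkCollector(HTMLParser):
    """Collect the href attribute of every <a> tag seen while parsing."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.append(dict(attrs).get("href"))

parser = LinkCollector()
parser.feed(HTML)
print(parser.links)  # ['/movie/1']
```

If the link above were inserted by JavaScript after page load, this parser would find nothing, and a real browser driven by Selenium would be needed instead.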


I. Declaring a Browser Object

from selenium import webdriver

browser = webdriver.Chrome()
browser = webdriver.Firefox()
browser = webdriver.Edge()
browser = webdriver.PhantomJS()  # deprecated in newer Selenium releases; prefer headless Chrome/Firefox
browser = webdriver.Safari()

 

II. Differences from Fetching Data with BeautifulSoup

(a) An Example

 

from selenium import webdriver

browser = webdriver.Chrome(r"C:\chromedriver.exe")
browser.get("https://www.zhihu.com/explore")


from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.wait import WebDriverWait

browser = webdriver.Chrome()
try:
    browser.get("https://www.baidu.com")       # equivalent to res = requests.get(url) plus soup = BeautifulSoup(res.text, "lxml")
    input = browser.find_element_by_id("kw")   # equivalent to item = soup.select(".board-item-content a")
    input.send_keys("Python")
    input.send_keys(Keys.ENTER)
    wait = WebDriverWait(browser, 10)
    wait.until(EC.presence_of_element_located((By.ID, "content_left")))
    print(browser.current_url)
    print(browser.get_cookies())
    print(browser.page_source)
finally:
    browser.close()


(b) Steps in Detail

1. Visiting a Page

from selenium import webdriver
import time

browser = webdriver.Chrome(r"C:\chromedriver.exe")
# In Python, \ is the escape character and \u introduces a Unicode escape, so a path
# containing \U (e.g. C:\Users\...) raises an error; prefix the string with r to make it raw.
# Note: the file is chromedriver.exe, not chrome.exe. Download it from
# http://npm.taobao.org/mirrors/chromedriver/
browser.get("https://www.taobao.com/")

time.sleep(5)    # keep the page open for five seconds
browser.close()  # without this line the window stays open

 

2. Finding Elements

(1) Single element

  • find_element_by_id
  • find_element_by_name
  • find_element_by_xpath
  • find_element_by_link_text
  • find_element_by_partial_link_text
  • find_element_by_tag_name
  • find_element_by_class_name
  • find_element_by_css_selector

(find_element_by_id('q') is equivalent to find_element(By.ID, 'q'))

(2) Multiple elements

  • find_elements_by_name
  • find_elements_by_xpath
  • find_elements_by_link_text
  • find_elements_by_partial_link_text
  • find_elements_by_tag_name
  • find_elements_by_class_name
  • find_elements_by_css_selector
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://www.taobao.com")
input_first = browser.find_element_by_id("q")
input_second = browser.find_element_by_css_selector("#q")
input_third = browser.find_element_by_xpath('//*[@id="q"]')
print(input_first, input_second, input_third)
browser.close()

 

3. Getting Attributes

from selenium import webdriver

browser = webdriver.Chrome()
url = "https://www.zhihu.com/explore"
browser.get(url)
logo = browser.find_element_by_id("zh-top-link-logo")
print(logo)
print(logo.get_attribute("class"))

 

4. Getting Text

from selenium import webdriver

browser = webdriver.Chrome()
url = "https://www.zhihu.com/explore"
browser.get(url)
input = browser.find_element_by_class_name("zu-top-add-question")
print(input.text)

 

5. Getting the ID, Location, Tag Name, and Size

from selenium import webdriver

browser = webdriver.Chrome()
url = "https://www.zhihu.com/explore"
browser.get(url)
input = browser.find_element_by_class_name("zu-top-add-question")
print(input.id)
print(input.location)
print(input.tag_name)
print(input.size)

 

6. Frames

import time
from selenium import webdriver
from selenium.common.exceptions import NoSuchElementException

browser = webdriver.Chrome()
url = "http://www.runoob.com/try/try.php?filename=jqueryui-api-droppable"
browser.get(url)
browser.switch_to.frame("iframeResult")
source = browser.find_element_by_css_selector("#draggable")
print(source)
try:
    logo = browser.find_element_by_class_name("logo")
except NoSuchElementException:
    print("NO LOGO")
browser.switch_to.parent_frame()
logo = browser.find_element_by_class_name("logo")
print(logo)
print(logo.text)

 

III. Dynamic Operations

(a) Element Interaction

Call interaction methods on the element objects you obtain.

from selenium import webdriver
from selenium.webdriver.common.by import By
import time

browser = webdriver.Chrome(r"C:\chromedriver.exe")
# The raw-string prefix r avoids escape-sequence errors in Windows paths (see the note above).
browser.get("https://www.taobao.com/")
input = browser.find_element_by_id("q")
input.send_keys("DELL")
# button = browser.find_element_by_class_name("btn-search")
button = browser.find_element_by_css_selector(".btn-search")  # the two lines are equivalent
# Compared with <input type="button">, the <button> element supports richer content.
button.click()
browser.close()

More operations: http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.remote.webelement

 

(b) Action Chains

Attach actions to an action chain and execute them in sequence.

from selenium import webdriver
from selenium.webdriver import ActionChains

browser = webdriver.Chrome(r"C:\chromedriver.exe")
browser.get("http://www.runoob.com/try/try.php?filename=jqueryui-api-droppable")
browser.switch_to.frame("iframeResult")
# An <iframe> embeds another document as an inline frame; here we switch into the inner frame.
source = browser.find_element_by_css_selector("#draggable")
target = browser.find_element_by_css_selector("#droppable")
action = ActionChains(browser)
action.drag_and_drop(source, target)
action.perform()

More operations: http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.action_chains

 

(c) Executing JavaScript

Useful when Selenium provides no API for the operation you want.

from selenium import webdriver

browser = webdriver.Chrome(r"C:\chromedriver.exe")
browser.get("https://www.zhihu.com/explore")
browser.execute_script("window.scrollTo(0, document.body.scrollHeight)")
browser.execute_script('alert("To Bottom")')

This scrolls to the bottom of the page and then shows an alert.

 

(d) Waits

1. Implicit waits

 

With an implicit wait, if WebDriver does not find an element in the DOM it keeps waiting, and only raises a no-such-element exception once the configured time is exceeded. In other words, when an element is not immediately present, the lookup keeps retrying against the DOM for up to the configured duration; the default is 0.

from selenium import webdriver

browser = webdriver.Chrome()
browser.implicitly_wait(10)
browser.get("https://www.zhihu.com/explore")
input = browser.find_element_by_class_name("zu-top-add-question")
print(input)

 

2. Explicit waits

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Chrome()
browser.get("https://www.taobao.com/")
wait = WebDriverWait(browser, 10)
input = wait.until(EC.presence_of_element_located((By.ID, "q")))
button = wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".btn-search")))
print(input, button)
  • title_is — the title equals the given text
  • title_contains — the title contains the given text
  • presence_of_element_located — element present in the DOM; takes a locator tuple, e.g. (By.ID, 'p')
  • visibility_of_element_located — element visible; takes a locator tuple
  • visibility_of — element visible; takes an element object
  • presence_of_all_elements_located — all matching elements present
  • text_to_be_present_in_element — an element's text contains the given string
  • text_to_be_present_in_element_value — an element's value attribute contains the given string
  • frame_to_be_available_and_switch_to_it — frame available; switches into it
  • invisibility_of_element_located — element not visible
  • element_to_be_clickable — element clickable
  • staleness_of — element no longer attached to the DOM; useful for detecting a page refresh
  • element_to_be_selected — element selected; takes an element object
  • element_located_to_be_selected — element selected; takes a locator tuple
  • element_selection_state_to_be — takes an element object and a state; True if they match, False otherwise
  • element_located_selection_state_to_be — takes a locator tuple and a state; True if they match, False otherwise
  • alert_is_present — an alert is present
 

Details: http://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.support.expected_conditions
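Under the hood, WebDriverWait.until simply polls its condition until it returns a truthy value or the timeout expires. Here is a rough, framework-free sketch of that loop; the names wait_until, poll_frequency, and element_present are illustrative, not Selenium's actual implementation, and the "element" is simulated rather than a real DOM lookup.

```python
import time

def wait_until(condition, timeout=10, poll_frequency=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` seconds pass."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_frequency)
    raise TimeoutError("condition not met within %s seconds" % timeout)

# Simulated "element" that appears only after a few polls,
# standing in for an element rendered late by JavaScript.
state = {"calls": 0}

def element_present():
    state["calls"] += 1
    return "element" if state["calls"] >= 3 else None

print(wait_until(element_present, timeout=5, poll_frequency=0.1))  # element
```

This is why an explicit wait returns as soon as the condition holds instead of always sleeping for the full timeout, which is the key advantage over a fixed time.sleep().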

 

(e) Forward and Back

import time
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://www.baidu.com/")
browser.get("https://www.taobao.com/")
browser.get("https://www.python.org/")
browser.back()
time.sleep(1)
browser.forward()
browser.close()

 

(f) Cookies

from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://www.zhihu.com/explore")
print(browser.get_cookies())
browser.add_cookie({"name": "name", "domain": "www.zhihu.com", "value": "germey"})
print(browser.get_cookies())
browser.delete_all_cookies()
print(browser.get_cookies())
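The dicts returned by get_cookies() are plain Python data, so a common pattern is saving them to a JSON file and re-adding them with add_cookie() in a later session to restore a login. A minimal sketch using only the standard library; the cookie names and values here are made up, and the add_cookie call is shown commented out because it needs a live browser.

```python
import json

# Shaped like selenium's browser.get_cookies() output (values are illustrative).
cookies = [
    {"name": "session", "domain": "www.zhihu.com", "value": "abc123"},
    {"name": "theme", "domain": "www.zhihu.com", "value": "dark"},
]

# Save after a successful login...
with open("cookies.json", "w") as f:
    json.dump(cookies, f)

# ...and restore in a later session:
with open("cookies.json") as f:
    restored = json.load(f)
# for cookie in restored:
#     browser.add_cookie(cookie)  # re-attach each cookie to the live browser
print(restored == cookies)  # True
```

Note that add_cookie only accepts cookies for the domain the browser is currently on, so you must browser.get() the site before restoring its cookies.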

 

(g) Tab Management

import time
from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://www.baidu.com")
browser.execute_script("window.open()")
print(browser.window_handles)
browser.switch_to.window(browser.window_handles[1])
browser.get("https://www.taobao.com")
time.sleep(1)
browser.switch_to.window(browser.window_handles[0])
browser.get("https://python.org")

 

(h) Exception Handling

from selenium import webdriver

browser = webdriver.Chrome()
browser.get("https://www.baidu.com")
browser.find_element_by_id("hello")  # no such element on the page: raises NoSuchElementException

 

from selenium import webdriver
from selenium.common.exceptions import TimeoutException, NoSuchElementException

browser = webdriver.Chrome()
try:
    browser.get("https://www.baidu.com")
except TimeoutException:
    print("Time Out")
try:
    browser.find_element_by_id("hello")
except NoSuchElementException:
    print("No Element")
finally:
    browser.close()

Documentation: http://selenium-python.readthedocs.io/api.html#module-selenium.common.exceptions
