A super practical Python crawler using requests and BeautifulSoup

Posted by duoba


import random
import time

import requests
from bs4 import BeautifulSoup

user_agent_list = [
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (Khtml, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
"Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]

for xx in range(10):  # number of listing pages to crawl; adjust as needed
  UA = random.choice(user_agent_list)  # randomly pick one User-Agent string from the list
  headers = {'User-Agent': UA}  # build the request headers with the chosen User-Agent

  time.sleep(random.randint(1, 15))  # pause between requests; hitting the site too fast can get your IP banned
  url0 = 'https://www.ruyile.com/xuexiao/?a=2&p=' + str(xx + 1)  # listing pages are numbered from 1

  html = requests.get(url0, headers=headers)
  soup = BeautifulSoup(html.content, 'lxml')

  links = soup.find_all('div', class_='sk')  # each school entry sits in a <div class="sk">
  for link in links:
    print(link.a.get_text())  # print the school name from the entry's first <a> tag
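If you also want the link to each entry's detail page rather than just the name, the anchor tag's href attribute can be read as well. The sketch below is a minimal, hedged example that reuses the user_agent_list defined above; it assumes each <div class="sk"> on ruyile.com contains an <a> tag whose href points to the detail page, and the output file name results.csv is purely illustrative.

import csv
import random
import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

rows = []
for xx in range(3):  # number of listing pages to fetch; adjust as needed
    headers = {'User-Agent': random.choice(user_agent_list)}  # reuse the list defined above
    time.sleep(random.randint(1, 5))  # be polite between requests
    page = requests.get('https://www.ruyile.com/xuexiao/?a=2&p=' + str(xx + 1), headers=headers)
    soup = BeautifulSoup(page.content, 'lxml')
    for div in soup.find_all('div', class_='sk'):
        a = div.a
        if a is not None and a.get('href'):  # skip entries without a usable link
            name = a.get_text(strip=True)
            link = urljoin('https://www.ruyile.com/', a.get('href'))  # resolve relative hrefs to absolute URLs
            rows.append((name, link))

with open('results.csv', 'w', newline='', encoding='utf-8') as f:  # results.csv is just an example name
    csv.writer(f).writerows(rows)

urljoin is used here because listing pages often emit relative hrefs; writing to CSV instead of printing makes the scraped results easier to reuse later.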
