Web Scraping - requests

Posted by 风hua

The requests module

- 1. What is the requests module?
    - A third-party Python library for sending network requests; it simulates a browser issuing a request. (It is not part of the standard library, so it must be installed.)
- 2. Why use requests (instead of urllib)?
    - 1. It URL-encodes request parameters automatically.
    - 2. It handles POST request parameters automatically.
    - 3. It simplifies cookie and proxy handling, which with urllib requires:
        Cookie handling:
        - create a cookiejar object
        - create a handler object
        - create an opener
        
        Proxy handling:
        - create a handler object with the proxy IP and port wrapped inside
        - create an opener object
- 3. How to use requests
    - Install: pip install requests
    - Workflow:
        - 1. Specify the URL
        - 2. Send the request with the requests module
        - 3. Get the response data
        - 4. Persist the data
- 4. Five small requests-based scraping projects to learn and consolidate the module:
    - GET request
    - POST request
    - GET via ajax
    - POST via ajax
    - a combined project
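For contrast, the cookie and proxy boilerplate that requests replaces (listed under point 2 above) looks roughly like this with the standard-library urllib. This is only a construction sketch: no request is sent, and the proxy address is a placeholder, not a working proxy.

```python
import urllib.request
from http.cookiejar import CookieJar

# Cookie handling with urllib: cookiejar -> handler -> opener
cookie_jar = CookieJar()
cookie_handler = urllib.request.HTTPCookieProcessor(cookie_jar)

# Proxy handling with urllib: wrap the proxy ip:port in a handler
# (placeholder address, not a real proxy)
proxy_handler = urllib.request.ProxyHandler({'http': 'http://127.0.0.1:8888'})

# Both handlers get combined into a single opener object
opener = urllib.request.build_opener(cookie_handler, proxy_handler)

print(type(opener).__name__)  # OpenerDirector
```

With requests, the same two concerns shrink to `requests.get(url, proxies={'http': 'http://127.0.0.1:8888'})` for proxies and `requests.Session()` for cookies, which persist across calls automatically.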

Sending a GET request with requests

- Requirement: scrape the Sogou homepage
import requests

# Specify the URL
url = 'https://www.sogou.com/'

# Send the GET request: get() returns a response object on success
response = requests.get(url=url)

# Get the response data: .text holds the page data as a string
page_data = response.text

print(page_data)

# Persist the data
# with open('./sougou.html', 'w', encoding='utf-8') as fp:
#     fp.write(page_data)
# Other important attributes of the response object
import requests

# Specify the URL
url = 'https://www.sogou.com/'

# Send the GET request
response = requests.get(url=url)

# .content holds the page data as bytes
# print(response.content)
# The response status code
# print(response.status_code)
# The response headers
# print(response.headers)
# The URL that was requested
print(response.url)

https://www.sogou.com/
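The relationship between `response.content` and `response.text` can be sketched offline: `.text` is essentially the raw bytes of `.content` decoded with `response.encoding`. The names below are stand-ins for a real response, simulated here so the snippet runs without a network.

```python
# Simulated raw body of a response (what response.content would hold)
raw_bytes = '搜狗搜索'.encode('utf-8')

# response.text is roughly response.content decoded with response.encoding;
# when the auto-detected encoding is wrong, decode the bytes yourself:
detected_encoding = 'utf-8'   # stand-in for response.encoding
page_text = raw_bytes.decode(detected_encoding)

print(page_text)  # 搜狗搜索
```

This is why garbled Chinese text in `.text` is usually fixed by setting `response.encoding = 'utf-8'` (or the page's real charset) before reading it.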
- How requests handles a GET request that carries parameters
    Approach 1: hard-code the query string into the URL
    - Requirement: given a search term, fetch the corresponding Sogou search results page
import requests

url = 'https://www.sogou.com/web?query=周杰伦&ie=utf-8'

response = requests.get(url=url)

page_text = response.text

with open('./zhou.html', 'w', encoding='utf-8') as fp:
    fp.write(page_text)
    Approach 2: pass the parameters as a dict via the params argument, together with custom request headers

import requests

# Specify the URL (no query string; requests appends it)
url = 'https://www.sogou.com/web'

# Wrap the GET parameters in a dict
params = {
    'query': '周杰伦',
    'ie': 'utf-8'
}
# Custom request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
}

response = requests.get(url=url, params=params, headers=headers)

response.status_code

200
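The "automatic URL encoding" claimed earlier can be seen with the standard library: `urlencode` is essentially what requests applies to the params dict before appending it to the URL. A self-contained sketch:

```python
from urllib.parse import urlencode

params = {'query': '周杰伦', 'ie': 'utf-8'}

# requests percent-encodes non-ASCII values like this before
# appending them to the URL as a query string
query_string = urlencode(params)
full_url = 'https://www.sogou.com/web?' + query_string

print(full_url)
# https://www.sogou.com/web?query=%E5%91%A8%E6%9D%B0%E4%BC%A6&ie=utf-8
```

With plain urllib you would have to call `urlencode` yourself; with requests, passing `params=` does it for you.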

Sending a POST request with requests

- Requirement: log in to Douban and fetch the page returned after a successful login
import requests

# 1. Specify the URL for the POST request
url = 'https://accounts.douban.com/login'

# Wrap the POST parameters in a dict
data = {
    "source": "movie",
    "redir": "https://movie.douban.com/",
    "form_email": "15027900535",
    "form_password": "bobo@15027900535",
    "login": "登录",
}
# Custom request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
}
# 2. Send the POST request
response = requests.post(url=url, data=data, headers=headers)

# 3. Get the page data from the response object
page_text = response.text

# 4. Persist the data
with open('./douban.html', 'w', encoding='utf-8') as fp:
    fp.write(page_text)

A GET request sent via ajax

- Requirement: scrape movie detail data from the Douban movie rankings

import requests

url = 'https://movie.douban.com/j/chart/top_list?'

# Wrap the parameters carried by the ajax GET request
params = {
    'type': '5',
    'interval_id': '100:90',
    'action': '',
    'start': '200',
    'limit': '20'
}
# Custom request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
}

response = requests.get(url=url, params=params, headers=headers)

# print(response.text)

A POST request sent via ajax

- Requirement: scrape KFC restaurant location data by city
import requests

# 1. Specify the URL
post_url = 'http://www.kfc.com.cn/kfccda/ashx/GetStoreList.ashx?op=keyword'

# Wrap the POST parameters
data = {
    "cname": "",
    "pid": "",
    "keyword": "上海",
    "pageIndex": "1",
    "pageSize": "10",
}
# Custom request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
}

# 2. Send the ajax POST request
response = requests.post(url=post_url, headers=headers, data=data)

response.text
'{"Table":[{"rowcount":28}],"Table1":[{"rownum":1,"storeName":"开发区上海路","addressDetail":"开发区上海路80号乐天玛特超一楼","pro":"Wi-Fi,礼品卡,生日餐会","provinceName":"江苏省","cityName":"南通市"},{"rownum":2,"storeName":"太仓新区东亭路","addressDetail":"太仓市娄江路与县府街交叉口西南侧(太仓新区上海路北、东亭路东地块)万达广场","pro":"Wi-Fi,店内参观,礼品卡,生日餐会","provinceName":"江苏省","cityName":"苏州市"},{"rownum":3,"storeName":"动力南广场(汇金奥特莱斯B1层)","addressDetail":"石龙路750-3号上海南站地下商场南馆","pro":"精选店,礼品卡","provinceName":"上海市","cityName":"上海市"},{"rownum":4,"storeName":"谷阳","addressDetail":"松汇中路568号上海鹿都商业广场","pro":"Wi-Fi,店内参观,礼品卡,生日餐会,溯源","provinceName":"上海市","cityName":"上海市"},{"rownum":5,"storeName":"南广场","addressDetail":"秣陵路303号上海火车站西南出口一侧","pro":"Wi-Fi,礼品卡","provinceName":"上海市","cityName":"上海市"},{"rownum":6,"storeName":"北广场","addressDetail":"秣陵路303号上海站北广场北立面1F大厅入口西侧位置","pro":"Wi-Fi","provinceName":"上海市","cityName":"上海市"},{"rownum":7,"storeName":"南站精选(火车站站内)","addressDetail":"沪闵路9001号上海南站候车大厅","pro":"精选店,礼品卡","provinceName":"上海市","cityName":"上海市"},{"rownum":8,"storeName":"上海滩","addressDetail":"佘山林湖路888号-1上海滩区5号","pro":"精选店,礼品卡","provinceName":"上海市","cityName":"上海市"},{"rownum":9,"storeName":"动力南精选(3号线2号出口)","addressDetail":"沪闵路9001号上海南站(1F三角地A-1)一层","pro":"精选店,礼品卡","provinceName":"上海市","cityName":"上海市"},{"rownum":10,"storeName":"上海南路","addressDetail":"上海南路3号699生活空间3号楼","pro":"Wi-Fi,店内参观,礼品卡,生日餐会","provinceName":"江西省","cityName":"南昌市"}]}'
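Since the ajax endpoint returns JSON, `response.json()` (or `json.loads` on `response.text`) turns it into Python objects instead of a raw string. A sketch using a trimmed-down sample of the structure shown above:

```python
import json

# A trimmed sample of the KFC response body shown above
sample = ('{"Table":[{"rowcount":28}],'
          '"Table1":[{"rownum":1,"storeName":"开发区上海路","cityName":"南通市"},'
          '{"rownum":2,"storeName":"太仓新区东亭路","cityName":"苏州市"}]}')

result = json.loads(sample)          # what response.json() would return

total = result['Table'][0]['rowcount']
stores = [row['storeName'] for row in result['Table1']]

print(total)    # 28
print(stores)
```

Working with the parsed dict is much more robust than matching substrings in `response.text`.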

A combined mini-project

- Requirement: scrape the Sogou-Zhihu result pages for a given search term over a given page range
# Page data for a user-chosen page range
import requests
import os

# Create a directory for the output files
if not os.path.exists('./pages'):
    os.mkdir('./pages')

word = input('enter a word:')

# Let the user choose the page range
start_pageNum = int(input('enter a start pageNum:'))
end_pageNum = int(input('enter a end pageNum:'))
# Custom request headers
headers = {
    'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.100 Safari/537.36',
}
# 1. Specify the URL: design it as a reusable base URL
url = 'https://zhihu.sogou.com/zhihu'
for page in range(start_pageNum, end_pageNum + 1):
    params = {
        'query': word,
        'page': page,
        'ie': 'utf-8'
    }
    response = requests.get(url=url, params=params, headers=headers)

    # Get the page data for this page number from the response
    page_text = response.text

    # Persist the data
    fileName = word + str(page) + '.html'
    filePath = 'pages/' + fileName
    with open(filePath, 'w', encoding='utf-8') as fp:
        fp.write(page_text)
        print('page %d written successfully' % page)


enter a word:python
enter a start pageNum:3
enter a end pageNum:5
page 3 written successfully
page 4 written successfully
page 5 written successfully
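One fragile spot in the loop above: the user-typed word goes straight into a filename, so input like `a/b` would break `open()`. A small helper that strips characters most filesystems reject (`safe_filename` is my own name, not part of the original project):

```python
import re

def safe_filename(word, page):
    """Build 'word{page}.html', replacing characters that
    Windows/Unix filesystems may reject in filenames."""
    stem = re.sub(r'[\\/:*?"<>|]', '_', word)
    return '%s%d.html' % (stem, page)

print(safe_filename('python', 3))   # python3.html
print(safe_filename('a/b:c', 1))    # a_b_c1.html
```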
# Exercise: scrape a given page range from Baidu Tieba
# Exercise: scrape Qiushibaike
