Scrapy: Downloader Middleware
I. Overview
Downloader Middleware has three core methods:
process_request(request, spider)
process_response(request, response, spider)
process_exception(request, exception, spider)
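As a quick orientation, here is a minimal middleware skeleton showing where each hook fits and what it may return, per Scrapy's documented contract (the class name DemoDownloaderMiddleware is illustrative, not from the original post):

class DemoDownloaderMiddleware:
    def process_request(self, request, spider):
        # Called for each request passing through the middleware.
        # Return None to continue processing, a Response to short-circuit
        # the download, a Request to reschedule, or raise IgnoreRequest.
        return None

    def process_response(self, request, response, spider):
        # Called with the downloaded response; must return a Response,
        # a Request, or raise IgnoreRequest.
        return response

    def process_exception(self, request, exception, spider):
        # Called when downloading (or a process_request hook) raises.
        # Return None to keep propagating, or a Response/Request to recover.
        return None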
II. Two features implemented in this walkthrough
1. Change the User-Agent on outgoing requests
Method 1: edit the USER_AGENT variable in settings.py; adding a single line USER_AGENT = '....' is enough.
Method 2: modify middlewares.py to produce a random User-Agent: define a RandomUserAgentMiddleware class there and give it a process_request() method.
2. Change the status code of the returned response
Define a process_response() method in middlewares.py.
III. Implementation
Create the project and generate a spider:
scrapy startproject httpbintest
cd httpbintest && scrapy genspider httpbin httpbin.org
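For reference, the generated project should look roughly like this (the standard Scrapy layout; exact contents can vary by Scrapy version). The files touched below are spiders/httpbin.py, middlewares.py, and settings.py:

httpbintest/
    scrapy.cfg
    httpbintest/
        __init__.py
        items.py
        middlewares.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            httpbin.py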
Edit httpbin.py so the spider logs the response body and status code:
# -*- coding: utf-8 -*-
import scrapy


class HttpbinSpider(scrapy.Spider):
    name = 'httpbin'
    allowed_domains = ['httpbin.org']
    start_urls = ['http://httpbin.org/get']

    def parse(self, response):
        # print(response.text)
        self.logger.debug(response.text)
        self.logger.debug('status code: ' + str(response.status))
Add the following code to middlewares.py. Its process_request method assigns a random User-Agent to each outgoing request; its process_response method rewrites the response status code to 201:
import random


class RandomUserAgentMiddleware():
    def __init__(self):
        # Pool of user-agent strings to choose from.
        self.user_agents = [
            'Mozilla/5.0 (Windows; U; MSIE 9.0; Windows NT 9.0; en-US)',
            'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.2 (KHTML, like Gecko) Chrome/22.0.1216.0 Safari/537.2',
            'Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:15.0) Gecko/20100101 Firefox/15.0.1',
        ]

    def process_request(self, request, spider):
        # Overwrite the User-Agent header with a random pick from the pool.
        request.headers['User-Agent'] = random.choice(self.user_agents)

    def process_response(self, request, response, spider):
        # Rewrite the status code before the response reaches the spider.
        response.status = 201
        return response
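As an optional refinement (a sketch, not part of the original post), the user-agent pool can be loaded from the project settings via Scrapy's from_crawler hook instead of being hard-coded; the setting name RANDOM_USER_AGENT_LIST below is a made-up example:

import random


class RandomUserAgentMiddleware:
    def __init__(self, user_agents):
        self.user_agents = user_agents

    @classmethod
    def from_crawler(cls, crawler):
        # RANDOM_USER_AGENT_LIST is a hypothetical setting name;
        # define it in settings.py as a list of strings.
        return cls(crawler.settings.getlist('RANDOM_USER_AGENT_LIST'))

    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(self.user_agents)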
Finally, register the middleware in settings.py so the changes above take effect:
DOWNLOADER_MIDDLEWARES = {
    'httpbintest.middlewares.RandomUserAgentMiddleware': 543,
}
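Run the spider to verify both changes (the command follows from the spider name above; exact log output will vary):

scrapy crawl httpbin

Because http://httpbin.org/get echoes the request headers back in its JSON body, the logged response.text should contain whichever of the three user-agents was picked, and the 'status code' line should read 201 rather than 200. A note on the priority value: in Scrapy's defaults the built-in UserAgentMiddleware sits at 500, and process_request hooks run in ascending priority order, so our middleware at 543 runs later and its User-Agent header wins.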