Eight Extensions of the Scrapy Framework

Posted by linyuhong


1. Proxies

Scrapy's built-in proxy support comes from HttpProxyMiddleware, which reads its proxy configuration from environment variables, so those variables have to be set first.

from scrapy.downloadermiddlewares.httpproxy import HttpProxyMiddleware  # the old scrapy.contrib.* import path is deprecated

Option 1: use the default middleware

import os

os.environ['http_proxy'] = 'http://root:[email protected]:9999/'
os.environ['https_proxy'] = 'http://192.168.11.11:9999/'

Drawback: the built-in approach keeps the proxies in the Python process's environment variables, so every time a proxy is needed it has to be looked up there and the string split and matched piece by piece, which is inefficient and clumsy.

Option 2: use a custom downloader middleware

import base64
import random

import six


def to_bytes(text, encoding=None, errors='strict'):
    if isinstance(text, bytes):
        return text
    if not isinstance(text, six.string_types):
        raise TypeError('to_bytes must receive a unicode, str or bytes '
                        'object, got %s' % type(text).__name__)
    if encoding is None:
        encoding = 'utf-8'
    return text.encode(encoding, errors)


class ProxyMiddleware(object):
    def process_request(self, request, spider):
        PROXIES = [
            {'ip_port': '111.11.228.75:80', 'user_pass': ''},
            {'ip_port': '120.198.243.22:80', 'user_pass': ''},
            {'ip_port': '111.8.60.9:8123', 'user_pass': ''},
            {'ip_port': '101.71.27.120:80', 'user_pass': ''},
            {'ip_port': '122.96.59.104:80', 'user_pass': ''},
            {'ip_port': '122.224.249.122:8088', 'user_pass': ''},
        ]
        proxy = random.choice(PROXIES)
        if proxy['user_pass'] is not None:
            request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])
            encoded_user_pass = base64.b64encode(to_bytes(proxy['user_pass']))
            request.headers['Proxy-Authorization'] = to_bytes('Basic ') + encoded_user_pass
            print("**************ProxyMiddleware have pass************" + proxy['ip_port'])
        else:
            print("**************ProxyMiddleware no pass************" + proxy['ip_port'])
            request.meta['proxy'] = to_bytes("http://%s" % proxy['ip_port'])


# settings.py
DOWNLOADER_MIDDLEWARES = {
    'step8_king.middlewares.ProxyMiddleware': 500,
}
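As a side note, a per-request proxy can also be set directly through request.meta['proxy'], which the built-in HttpProxyMiddleware honors without any custom middleware. A minimal sketch (the spider name and URLs here are only for illustration):

import scrapy


class ProxyDemoSpider(scrapy.Spider):
    # hypothetical spider used only to demonstrate per-request proxies
    name = 'proxy_demo'

    def start_requests(self):
        # HttpProxyMiddleware picks up request.meta['proxy'] if it is present
        yield scrapy.Request(
            'http://httpbin.org/ip',
            meta={'proxy': 'http://111.11.228.75:80'},
        )

    def parse(self, response):
        self.logger.info(response.text)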

 

2. HTTPS certificates

There are two cases when crawling HTTPS sites:
1. The target site uses a trusted certificate (supported by default)

DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
DOWNLOADER_CLIENTCONTEXTFACTORY = "scrapy.core.downloader.contextfactory.ScrapyClientContextFactory"

 

2. The target site uses a custom certificate

# settings.py
DOWNLOADER_HTTPCLIENTFACTORY = "scrapy.core.downloader.webclient.ScrapyHTTPClientFactory"
DOWNLOADER_CLIENTCONTEXTFACTORY = "step8_king.https.MySSLFactory"

# https.py
from scrapy.core.downloader.contextfactory import ScrapyClientContextFactory
from twisted.internet.ssl import CertificateOptions


class MySSLFactory(ScrapyClientContextFactory):
    def getCertificateOptions(self):
        from OpenSSL import crypto
        v1 = crypto.load_privatekey(
            crypto.FILETYPE_PEM,
            open('/Users/wupeiqi/client.key.unsecure', mode='r').read()
        )
        v2 = crypto.load_certificate(
            crypto.FILETYPE_PEM,
            open('/Users/wupeiqi/client.pem', mode='r').read()
        )
        return CertificateOptions(
            privateKey=v1,   # PKey object
            certificate=v2,  # X509 object
            verify=False,
            method=getattr(self, 'method', getattr(self, '_ssl_method', None))
        )

Other:
    Related classes:
        scrapy.core.downloader.handlers.http.HttpDownloadHandler
        scrapy.core.downloader.webclient.ScrapyHTTPClientFactory
        scrapy.core.downloader.contextfactory.ScrapyClientContextFactory
    Related settings:
        DOWNLOADER_HTTPCLIENTFACTORY
        DOWNLOADER_CLIENTCONTEXTFACTORY

 

3. Caching

# Purpose: cache requests and responses that have already been sent so they can be reused later
from scrapy.downloadermiddlewares.httpcache import HttpCacheMiddleware
from scrapy.extensions.httpcache import DummyPolicy
from scrapy.extensions.httpcache import FilesystemCacheStorage

 

# Whether to enable the HTTP cache
# HTTPCACHE_ENABLED = True

# Cache policy: cache every request; subsequent identical requests are served straight from the cache
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.DummyPolicy"
# Cache policy: cache according to HTTP response headers such as Cache-Control and Last-Modified
# HTTPCACHE_POLICY = "scrapy.extensions.httpcache.RFC2616Policy"

# Cache expiration time in seconds (0 means cached responses never expire)
# HTTPCACHE_EXPIRATION_SECS = 0

# Directory where the cache is stored
# HTTPCACHE_DIR = 'httpcache'

# HTTP status codes that should not be cached
# HTTPCACHE_IGNORE_HTTP_CODES = []

# Storage backend used by the cache
# HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
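If neither built-in policy fits, HTTPCACHE_POLICY can point at your own policy class. A minimal sketch, assuming a hypothetical module step8_king/cachepolicy.py, that only caches successful GET responses:

# step8_king/cachepolicy.py
from scrapy.extensions.httpcache import DummyPolicy


class CacheOkGetOnly(DummyPolicy):
    """Cache only 200 responses to GET requests; everything else bypasses the cache."""

    def should_cache_request(self, request):
        return request.method == 'GET' and super().should_cache_request(request)

    def should_cache_response(self, response, request):
        return response.status == 200 and super().should_cache_response(response, request)


# settings.py
HTTPCACHE_ENABLED = True
HTTPCACHE_POLICY = 'step8_king.cachepolicy.CacheOkGetOnly'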

4. Downloader middleware

class DownMiddleware1(object):
    def process_request(self, request, spider):
        """
        Called for every request that is about to be downloaded, by each downloader middleware in turn.
        :param request:
        :param spider:
        :return:
            None: continue with the remaining middlewares and download the request
            Response object: stop calling process_request and start calling process_response
            Request object: stop the middleware chain and send the returned request back to the scheduler
            raise IgnoreRequest: stop calling process_request and start calling process_exception
        """
        pass

    def process_response(self, request, response, spider):
        """
        Called with the response returned by the downloader, on its way back to the spider.
        :param request:
        :param response:
        :param spider:
        :return:
            Response object: handed on to the process_response of the remaining middlewares
            Request object: stop the middleware chain; the request is rescheduled for download
            raise IgnoreRequest: Request.errback is called
        """
        print('response1')
        return response

    def process_exception(self, request, exception, spider):
        """
        Called when a download handler or a process_request() of a downloader middleware raises an exception.
        :param request:
        :param exception:
        :param spider:
        :return:
            None: keep passing the exception to the remaining middlewares
            Response object: stop calling the remaining process_exception methods
            Request object: stop the middleware chain; the request is rescheduled for download
        """
        return None


Default downloader middlewares (in current Scrapy versions these classes live under
scrapy.downloadermiddlewares.* instead of scrapy.contrib.downloadermiddleware.*):
{
    'scrapy.contrib.downloadermiddleware.robotstxt.RobotsTxtMiddleware': 100,
    'scrapy.contrib.downloadermiddleware.httpauth.HttpAuthMiddleware': 300,
    'scrapy.contrib.downloadermiddleware.downloadtimeout.DownloadTimeoutMiddleware': 350,
    'scrapy.contrib.downloadermiddleware.useragent.UserAgentMiddleware': 400,
    'scrapy.contrib.downloadermiddleware.retry.RetryMiddleware': 500,
    'scrapy.contrib.downloadermiddleware.defaultheaders.DefaultHeadersMiddleware': 550,
    'scrapy.contrib.downloadermiddleware.redirect.MetaRefreshMiddleware': 580,
    'scrapy.contrib.downloadermiddleware.httpcompression.HttpCompressionMiddleware': 590,
    'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': 600,
    'scrapy.contrib.downloadermiddleware.cookies.CookiesMiddleware': 700,
    'scrapy.contrib.downloadermiddleware.httpproxy.HttpProxyMiddleware': 750,
    'scrapy.contrib.downloadermiddleware.chunked.ChunkedTransferMiddleware': 830,
    'scrapy.contrib.downloadermiddleware.stats.DownloaderStats': 850,
    'scrapy.contrib.downloadermiddleware.httpcache.HttpCacheMiddleware': 900,
}

"""
# from scrapy.contrib.downloadermiddleware.httpauth import HttpAuthMiddleware
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
# DOWNLOADER_MIDDLEWARES = {
#    ‘step8_king.middlewares.DownMiddleware1‘: 100,
#    ‘step8_king.middlewares.DownMiddleware2‘: 500,
# }
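As a concrete illustration of the hooks above, here is a minimal sketch of a downloader middleware that rotates the User-Agent header on every request (the module path and the agent strings are made up):

# middlewares.py
import random


class RandomUserAgentMiddleware(object):
    # a tiny illustrative pool; in practice you would load a much larger list
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)',
        'Mozilla/5.0 (X11; Linux x86_64)',
    ]

    def process_request(self, request, spider):
        # returning None lets the remaining middlewares and the downloader carry on
        request.headers['User-Agent'] = random.choice(self.USER_AGENTS)
        return None


# settings.py
DOWNLOADER_MIDDLEWARES = {
    'step8_king.middlewares.RandomUserAgentMiddleware': 400,
}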

 

5. Spider middleware

class SpiderMiddleware(object):

    def process_spider_input(self, response, spider):
        """
        Called for each response after it has been downloaded, before it is handed to the spider's parse callback.
        :param response:
        :param spider:
        :return:
        """
        pass

    def process_spider_output(self, response, result, spider):
        """
        Called with the results the spider returns after processing a response.
        :param response:
        :param result:
        :param spider:
        :return: must return an iterable containing Request or Item objects
        """
        return result

    def process_spider_exception(self, response, exception, spider):
        """
        Called when an exception is raised.
        :param response:
        :param exception:
        :param spider:
        :return: None to keep passing the exception to the remaining middlewares;
                 or an iterable of Request or Item objects, which is handed to the scheduler or the item pipelines
        """
        return None

    def process_start_requests(self, start_requests, spider):
        """
        Called with the start requests when the spider is opened.
        :param start_requests:
        :param spider:
        :return: an iterable of Request objects
        """
        return start_requests


Built-in spider middlewares (in current Scrapy versions these classes live under scrapy.spidermiddlewares.*):
    'scrapy.contrib.spidermiddleware.httperror.HttpErrorMiddleware': 50,
    'scrapy.contrib.spidermiddleware.offsite.OffsiteMiddleware': 500,
    'scrapy.contrib.spidermiddleware.referer.RefererMiddleware': 700,
    'scrapy.contrib.spidermiddleware.urllength.UrlLengthMiddleware': 800,
    'scrapy.contrib.spidermiddleware.depth.DepthMiddleware': 900,

"""
# from scrapy.contrib.spidermiddleware.referer import RefererMiddleware
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
SPIDER_MIDDLEWARES = {
   # ‘step8_king.middlewares.SpiderMiddleware‘: 543,
}
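To make the hooks concrete, here is a minimal sketch of a spider middleware that drops scraped items missing a 'title' field while letting requests pass through untouched (the module path and the field name are made up, and dict-like items are assumed):

# middlewares.py
import scrapy


class DropEmptyTitleMiddleware(object):

    def process_spider_output(self, response, result, spider):
        for element in result:
            if isinstance(element, scrapy.Request):
                # follow-up requests always pass through
                yield element
            elif element.get('title'):
                # dict-like items with a non-empty 'title' are kept
                yield element
            # items without a title are simply not yielded, i.e. dropped


# settings.py
SPIDER_MIDDLEWARES = {
    'step8_king.middlewares.DropEmptyTitleMiddleware': 543,
}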

 

6. Pipelines

from scrapy.exceptions import DropItem


class CustomPipeline(object):
    def __init__(self, v):
        self.value = v

    def process_item(self, item, spider):
        # process the item and persist it here

        # returning the item hands it on to the remaining pipelines
        return item

        # to discard the item so that no later pipeline sees it:
        # raise DropItem()

    @classmethod
    def from_crawler(cls, crawler):
        """
        Called at initialization time to create the pipeline object.
        :param crawler:
        :return:
        """
        val = crawler.settings.getint('MMMM')
        return cls(val)

    def open_spider(self, spider):
        """
        Called when the spider starts running.
        :param spider:
        :return:
        """
        print('000000')

    def close_spider(self, spider):
        """
        Called when the spider is closed.
        :param spider:
        :return:
        """
        print('111111')
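The pipeline only runs once it is registered in settings.py. A minimal sketch reusing the class above (MMMM mirrors the setting read in from_crawler and is otherwise arbitrary):

# settings.py
ITEM_PIPELINES = {
    # lower numbers run earlier; values conventionally range from 0 to 1000
    'step8_king.pipelines.CustomPipeline': 300,
}

# the custom setting read by CustomPipeline.from_crawler
MMMM = 123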

 

7. Extensions and signals

from scrapy import signals


class MyExtension(object):
    def __init__(self, value):
        self.value = value

    @classmethod
    def from_crawler(cls, crawler):
        val = crawler.settings.getint('MMMM')
        ext = cls(val)

        # hook the extension's methods up to Scrapy's signals
        crawler.signals.connect(ext.spider_opened, signal=signals.spider_opened)
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)

        return ext

    def spider_opened(self, spider):
        print('open')

    def spider_closed(self, spider):
        print('close')
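The extension is switched on through the EXTENSIONS setting. A minimal sketch, assuming the class above lives in a hypothetical step8_king/extensions.py:

# settings.py
EXTENSIONS = {
    'step8_king.extensions.MyExtension': 500,
}

# the setting read by MyExtension.from_crawler
MMMM = 123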

 

8. URL deduplication

class RepeatUrl:
    def __init__(self):
        self.visited_url = set()

    @classmethod
    def from_settings(cls, settings):
        """
        Called at initialization time.
        :param settings:
        :return:
        """
        return cls()

    def request_seen(self, request):
        """
        Check whether the current request has already been visited.
        :param request:
        :return: True if it has been visited; False if it has not
        """
        if request.url in self.visited_url:
            return True
        self.visited_url.add(request.url)
        return False

    def open(self):
        """
        Called when the crawl starts issuing requests.
        :return:
        """
        print('open replication')

    def close(self, reason):
        """
        Called when the crawl finishes.
        :param reason:
        :return:
        """
        print('close replication')

    def log(self, request, spider):
        """
        Log a duplicate request.
        :param request:
        :param spider:
        :return:
        """
        print('repeat', request.url)
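To have Scrapy use this class instead of its default dupefilter, point DUPEFILTER_CLASS at it in settings.py (the module path is made up to match the examples above):

# settings.py
DUPEFILTER_CLASS = 'step8_king.duplication.RepeatUrl'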

 

A small addendum on URL deduplication: the naive approach shown above simply adds raw URLs to an in-memory set(). A better scheme is the one used by scrapy_redis, which works as follows:

- hash the request with sha1 to obtain a fingerprint
- store the fingerprint in a Redis set
- when a new request arrives, build its fingerprint the same way and check whether it already exists in the Redis set

The core of the implementation (here self.server is assumed to be a Redis connection and self.key the name of the Redis set holding the fingerprints):

import hashlib
from scrapy.utils.python import to_bytes
from w3lib.url import canonicalize_url

fp = hashlib.sha1()
fp.update(to_bytes(request.method))                 # request method
fp.update(to_bytes(canonicalize_url(request.url)))  # canonicalized URL
fp.update(request.body or b'')                      # request body
fingerprint = fp.hexdigest()

# sadd returns 0 when the member is already in the set, i.e. the request was seen before
added = self.server.sadd(self.key, fingerprint)
return added == 0
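Wrapped up as a Scrapy dupefilter, the same idea looks roughly like the sketch below (a simplified take on scrapy_redis's RFPDupeFilter; the Redis connection parameters and module path are placeholders):

# duplication_redis.py
import hashlib

import redis
from scrapy.utils.python import to_bytes
from w3lib.url import canonicalize_url


class RedisDupeFilter(object):
    def __init__(self, server, key):
        self.server = server   # Redis connection
        self.key = key         # name of the Redis set holding the fingerprints

    @classmethod
    def from_settings(cls, settings):
        # connection parameters are placeholders; read them from settings in real code
        server = redis.StrictRedis(host='127.0.0.1', port=6379)
        return cls(server, key='dupefilter:fingerprints')

    def request_fingerprint(self, request):
        fp = hashlib.sha1()
        fp.update(to_bytes(request.method))
        fp.update(to_bytes(canonicalize_url(request.url)))
        fp.update(request.body or b'')
        return fp.hexdigest()

    def request_seen(self, request):
        # True means the request is a duplicate and will be filtered out
        fp = self.request_fingerprint(request)
        added = self.server.sadd(self.key, fp)
        return added == 0

    def open(self):
        pass

    def close(self, reason):
        pass

    def log(self, request, spider):
        spider.logger.debug('Filtered duplicate request: %s', request.url)


# settings.py
# DUPEFILTER_CLASS = 'step8_king.duplication_redis.RedisDupeFilter'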

 
