Using the scrapy-redis module with Python's Scrapy

Posted by ywjfx



1. It helps to know basic Redis first; I'm still learning it myself. Some notes:

https://www.cnblogs.com/ywjfx/p/10262662.html
You can also look up the official docs yourself.

2. Install scrapy-redis

pip install scrapy-redis
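
Before wiring it into Scrapy, it may be worth confirming that a redis server is actually reachable. A minimal sketch using the redis-py client (pulled in as a dependency of scrapy-redis), assuming redis is running locally on the default port:

import redis

# Same address as REDIS_URL in the settings below (assumed local default)
r = redis.Redis(host="127.0.0.1", port=6379)

# PING returns True if the server answers
print(r.ping())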

3. Once it is installed you can use it right away; all it takes is adding the following four settings to settings.py:

####### redis configuration #######
# Deduplicate requests via fingerprints stored in redis
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Schedule requests through a shared redis queue
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Keep the queue and dupefilter in redis when the crawl ends
SCHEDULER_PERSIST = True
# Address of the redis server
REDIS_URL = "redis://127.0.0.1:6379"
####### redis configuration #######
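
Optionally, scrapy-redis also ships an item pipeline that pushes every scraped item into a redis list named "<spider>:items", which is handy when a separate process consumes the results. A sketch of the extra setting, if you want that behavior:

# Optional: serialize scraped items into the redis list "<spider>:items"
ITEM_PIPELINES = {
    "scrapy_redis.pipelines.RedisPipeline": 300,
}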

For example, a complete settings.py:

# -*- coding: utf-8 -*-

# Scrapy settings for circ project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html

BOT_NAME = "circ"

SPIDER_MODULES = ["circ.spiders"]
NEWSPIDER_MODULE = "circ.spiders"

LOG_LEVEL = "WARNING"
# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36"

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

####### redis configuration #######
# Deduplicate requests via fingerprints stored in redis
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
# Schedule requests through a shared redis queue
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
# Keep the queue and dupefilter in redis when the crawl ends
SCHEDULER_PERSIST = True
# Address of the redis server
REDIS_URL = "redis://127.0.0.1:6379"
####### redis configuration #######
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'circ.middlewares.CircSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#   'circ.middlewares.RandomUserAgentMiddleware': 543,
#   'circ.middlewares.CheckUserAgent': 544,
#}

# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'circ.pipelines.CircPipeline': 300,
#}

# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

USER_AGENTS_LIST = [
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Win64; x64; Trident/5.0; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 2.0.50727; Media Center PC 6.0)",
    "Mozilla/5.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0; WOW64; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET CLR 1.0.3705; .NET CLR 1.1.4322)",
    "Mozilla/4.0 (compatible; MSIE 7.0b; Windows NT 5.2; .NET CLR 1.1.4322; .NET CLR 2.0.50727; InfoPath.2; .NET CLR 3.0.04506.30)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN) AppleWebKit/523.15 (KHTML, like Gecko, Safari/419.3) Arora/0.3 (Change: 287 c9dfb30)",
    "Mozilla/5.0 (X11; U; Linux; en-US) AppleWebKit/527+ (KHTML, like Gecko, Safari/419.3) Arora/0.6",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.2pre) Gecko/20070215 K-Ninja/2.1.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1; zh-CN; rv:1.9) Gecko/20080705 Firefox/3.0 Kapiko/3.0",
    "Mozilla/5.0 (X11; Linux i686; U;) Gecko/20070322 Kazehakase/0.4.5",
]

4. Nothing else needs to change; just write and start the Scrapy crawl as usual. An example CrawlSpider:

# -*- coding: utf-8 -*-
import re

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class CfSpider(CrawlSpider):
    name = "cf"
    allowed_domains = ["bxjg.circ.gov.cn"]
    start_urls = ["http://bxjg.circ.gov.cn/web/site0/tab5240/module14430/page1.htm"]

    # Rules defining how follow-up URLs are extracted:
    # - LinkExtractor pulls out URLs matching the pattern
    # - callback handles the response for each extracted URL
    # - follow decides whether those responses are run through the rules again
    rules = (
        Rule(LinkExtractor(allow=r"/web/site0/tab5240/info\d+\.htm"), callback="parse_item"),
        Rule(LinkExtractor(allow=r"/web/site0/tab5240/module14430/page\d+\.htm"), follow=True),
    )

    # parse() is used internally by CrawlSpider, so the callback needs another name
    def parse_item(self, response):
        item = {}
        item["title"] = re.findall(r"<!--TitleStart-->(.*?)<!--TitleEnd-->", response.body.decode())[0]
        item["publish_date"] = re.findall(r"发布时间:(20\d{2}-\d{2}-\d{2})", response.body.decode())[0]
        # print(item)
        yield item
    #     yield scrapy.Request(
    #         url,
    #         callback=self.parse_detail,
    #         meta={"item": item},
    #     )
    #
    # def parse_detail(self, response):
    #     item = response.meta["item"]
    #     item["price"] = "///"
    #     yield item
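
The spider above still reads its start URLs from start_urls, so every process would begin from the same hard-coded page. scrapy-redis also provides RedisSpider and RedisCrawlSpider, which pop their start URLs from a redis list so several crawler processes can share one queue. A minimal sketch of the same spider converted (the class name and redis key here are illustrative):

# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule
from scrapy_redis.spiders import RedisCrawlSpider


class CfRedisSpider(RedisCrawlSpider):
    name = "cf_redis"
    allowed_domains = ["bxjg.circ.gov.cn"]
    # Start URLs are popped from this redis list instead of start_urls
    redis_key = "cf_redis:start_urls"

    rules = (
        Rule(LinkExtractor(allow=r"/web/site0/tab5240/info\d+\.htm"), callback="parse_item"),
        Rule(LinkExtractor(allow=r"/web/site0/tab5240/module14430/page\d+\.htm"), follow=True),
    )

    def parse_item(self, response):
        yield {"url": response.url}

Seed it by pushing a URL into redis, for example from redis-cli:

lpush cf_redis:start_urls http://bxjg.circ.gov.cn/web/site0/tab5240/module14430/page1.htm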

5. Log in to redis and inspect the keys with these commands:

keys *

type "cf:dupefilter"

SMEMBERS "cf:dupefilter"

If these keys hold data, the setup is working.
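
The same check can be scripted with redis-py; a small sketch, assuming the REDIS_URL used above:

import redis

r = redis.Redis(host="127.0.0.1", port=6379)

# "cf:dupefilter" is the set of request fingerprints kept by RFPDupeFilter;
# with SCHEDULER_PERSIST = True it survives after the crawl finishes
print(r.type("cf:dupefilter"))   # b'set'
print(r.scard("cf:dupefilter"))  # number of deduplicated requests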

