The Scrapy crawler framework: the settings module explained

Posted by 天宇之游

When writing crawlers day to day you rarely need to touch every parameter in settings.py. On a whim, I spent some time looking up what each parameter auto-generated in the settings module actually means, and I'm recording the results here.

  • Header comments describing the module
# -*- coding: utf-8 -*-

# Scrapy settings for new_center project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     http://doc.scrapy.org/en/latest/topics/settings.html
#     http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#     http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
  • The project name and the spider module paths; the engine uses these to locate your spiders
BOT_NAME = 'new_center'  # project name

SPIDER_MODULES = ['new_center.spiders']
NEWSPIDER_MODULE = 'new_center.spiders'
  • The browser USER_AGENT, which you can customize to disguise the crawler (a per-spider override sketch follows this block).
# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'new_center (+http://www.yourdomain.com)'
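If you only need to disguise a single spider rather than the whole project, Scrapy also supports per-spider overrides via the custom_settings class attribute. A minimal sketch, assuming a hypothetical spider name and a made-up UA string:
import scrapy

class DisguisedSpider(scrapy.Spider):
    # Hypothetical spider, used only to illustrate a per-spider override.
    name = 'disguised'
    start_urls = ['http://example.com']

    # custom_settings takes precedence over the project-wide settings.py
    custom_settings = {
        'USER_AGENT': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Example-UA',
    }

    def parse(self, response):
        self.logger.info('fetched %s', response.url)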
  • Whether to obey the robots.txt protocol. Obeying is the default; set this to False or comment it out to ignore robots.txt.
# Obey robots.txt rules
ROBOTSTXT_OBEY = True
  • The maximum number of concurrent requests Scrapy performs (default: 16).
# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32
  • The delay, in seconds, between requests to the same website (default: 0).
# Configure a delay for requests for the same website (default: 0)
# See http://scrapy.readthedocs.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
  • The maximum number of concurrent requests per domain and per IP. Prefer setting only one of the two; if both are set, the per-IP limit is the one enforced. (An illustrative combination with DOWNLOAD_DELAY follows this block.)
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
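Purely to illustrate how these knobs combine, here is one plausible polite-crawling setup; the numbers are example values, and RANDOMIZE_DOWNLOAD_DELAY is a related built-in setting (on by default) that jitters the delay:
# Illustrative polite-crawling combination (example values, not defaults):
DOWNLOAD_DELAY = 2                   # base delay between requests to one site
RANDOMIZE_DOWNLOAD_DELAY = True      # actual wait is 0.5x-1.5x of DOWNLOAD_DELAY
CONCURRENT_REQUESTS_PER_DOMAIN = 8   # per-domain cap; leave the per-IP cap unset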
  • Whether to disable cookies. They are enabled by default; uncomment this line to disable them.
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
  • Whether the Telnet console for remotely inspecting the running crawler is available. Enabled by default; uncomment to disable.
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
  • Default request headers. Rarely needed in practice, because headers can also be set dynamically on each request (see the sketch after this block).
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}
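For the dynamic route, headers can be passed per request from inside a spider. A minimal sketch (the spider name, URL, and header value are placeholders):
import scrapy

class HeaderSpider(scrapy.Spider):
    # Hypothetical spider showing per-request headers.
    name = 'header_demo'

    def start_requests(self):
        # Headers passed here apply only to this request and override
        # DEFAULT_REQUEST_HEADERS where the keys overlap.
        yield scrapy.Request(
            'http://example.com',
            headers={'Accept-Language': 'zh-CN,zh;q=0.9'},
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info('got status %s', response.status)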
  • Enable custom spider middlewares, the hooks sitting between the engine and your spiders. Commented out (disabled) by default; uncomment to enable. The number is the middleware's position in the chain: lower values sit closer to the engine, higher values closer to the spider. (A minimal sketch of such a class follows this block.)
# Enable or disable spider middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'new_center.middlewares.NewCenterSpiderMiddleware': 543,
#}
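As a rough sketch of what such a middleware class can look like (the class name mirrors the commented example above; the method bodies are illustrative, not the project's actual code):
class NewCenterSpiderMiddleware:
    # Called for each response before it reaches the spider callback.
    def process_spider_input(self, response, spider):
        spider.logger.debug('response %s entering spider', response.url)
        return None  # None means: continue processing

    # Called with the items/requests that the spider callback yields.
    def process_spider_output(self, response, result, spider):
        for item_or_request in result:
            yield item_or_request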
  • Enable custom downloader middlewares, the hooks between the engine and the downloader. Disabled by default; uncomment to enable. (A minimal sketch follows this block.)
# Enable or disable downloader middlewares
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'new_center.middlewares.MyCustomDownloaderMiddleware': 543,
#}
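A common use of a downloader middleware is rotating the User-Agent per request. A minimal sketch (the class name mirrors the commented example; the UA strings are made up):
import random

class MyCustomDownloaderMiddleware:
    # Illustrative pool of user-agent strings (placeholders).
    USER_AGENTS = [
        'Mozilla/5.0 (Windows NT 10.0; Win64; x64) Example-A',
        'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15) Example-B',
    ]

    # Called for every outgoing request before it is downloaded.
    def process_request(self, request, spider):
        request.headers['User-Agent'] = random.choice(self.USER_AGENTS)
        return None  # None means: hand the request on to the downloader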
  • Enable or disable extensions. A value of None disables the extension (here the built-in Telnet console); replace None with a number such as 500 to enable it. The order value rarely matters for extensions because they generally do not depend on one another, so several extensions may share the same value.
# Enable or disable extensions
# See http://scrapy.readthedocs.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}
  • Configure item pipelines. None are enabled by default; uncomment to enable one. The number sets the order in which items flow through the pipelines, lower values first. (A minimal pipeline sketch follows this block.)
# Configure item pipelines
# See http://scrapy.readthedocs.org/en/latest/topics/item-pipeline.html
#ITEM_PIPELINES = {
#    'new_center.pipelines.NewCenterPipeline': 300,
#}
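As a sketch of what the referenced pipeline class might contain (the class name mirrors the commented example; the logic is illustrative):
class NewCenterPipeline:
    # Called for every item the spider yields.
    def process_item(self, item, spider):
        spider.logger.debug('pipeline received an item from %s', spider.name)
        return item  # return the item so later pipelines can process it too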
  • The AutoThrottle extension, which automatically adjusts the crawl speed based on the load of both the Scrapy server and the website being crawled. Disabled by default; uncomment to enable.
# Enable and configure the AutoThrottle extension (disabled by default)
# See http://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True  # toggle for auto-throttling
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5  # initial download delay
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60  # maximum download delay
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False
  • Enable and configure HTTP caching, disabled by default. Note that HTTPCACHE_EXPIRATION_SECS = 0 means cached responses never expire.
# Enable and configure HTTP caching (disabled by default)
# See http://scrapy.readthedocs.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
