Integrating Splash with Scrapy

Posted by lokvahkoor


1. Install Splash:

  1. Install Docker
  2. Pull the Splash image: docker pull scrapinghub/splash
  3. Start Splash: docker run -p 8050:8050 scrapinghub/splash
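
Before wiring Splash into Scrapy, it can help to confirm the container is actually reachable by hitting its render.html endpoint directly. A minimal sketch, assuming Splash is listening on localhost:8050 (the port mapping from the docker command above):

import urllib.request

# Ask Splash to render a simple page; a 200 response means the service is up.
# localhost:8050 is an assumption based on the docker run command above.
check_url = 'http://localhost:8050/render.html?url=http://example.com&wait=1'
with urllib.request.urlopen(check_url, timeout=30) as resp:
    print(resp.status)        # expect 200
    print(len(resp.read()))   # length of the rendered HTML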

2. Install and configure scrapy-splash (pip install scrapy-splash; see https://github.com/scrapy-plugins/scrapy-splash)

Add the following to settings.py:

SPLASH_URL = 'http://<address-where-splash-is-deployed>:8050/'
DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}
SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}
DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'
HTTPCACHE_STORAGE = 'scrapy_splash.SplashAwareFSCacheStorage'
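
For example, if Splash was started with the docker command above on the same machine as the spider, the value would typically be (an assumption about your deployment; adjust the host as needed):

SPLASH_URL = 'http://localhost:8050'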

3. Spider code

Use SplashRequest in place of scrapy.Request:

yield SplashRequest(url, self.parse_result,  # first argument: the URL to request; second: the callback
    args={  # commonly used to pass the Lua script and its parameters
        "lua_source": """
                    splash:set_user_agent("...")
                    assert(splash:go(args.url))
                    assert(splash:wait(args.time))  -- the script can read external parameters via args.*
                    return {html = splash:html()}   -- return the rendered HTML
                    """,
        "time": time,  # Lua script parameters are passed in here
    },
    endpoint='run',  # defaults to 'render.html'; 'execute' and 'run' (usually 'run') are used to execute a script.
                     # The difference: with 'run' the script only needs the body of the main function,
                     # as in the example above, while 'execute' requires a full main(splash, args) function.
)
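
For comparison, the same request written against the 'execute' endpoint needs the script wrapped in a complete main function. A minimal sketch, reusing the same url, self.parse_result and time names as above:

yield SplashRequest(url, self.parse_result,
    endpoint='execute',  # 'execute' expects a full script with a main(splash, args) entry point
    args={
        "lua_source": """
                    function main(splash, args)
                        splash:set_user_agent("...")
                        assert(splash:go(args.url))
                        assert(splash:wait(args.time))
                        return {html = splash:html()}
                    end
                    """,
        "time": time,
    },
)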

A complete Spider example:

import scrapy
from scrapy_splash import SplashRequest


class ExampleSpider(scrapy.Spider):
    name = 'connect_splash'
    
    def start_requests(self):
        url = 'http://www.baidu.com'
        script = """
        assert(splash:go(args.url))
        assert(splash:wait(args.wait))
        return {html = splash:html()}
        """
        yield SplashRequest(url, self.parse, endpoint='run', args={'lua_source':script,'wait':3})
        
    def parse(self, response):
        # open a Scrapy shell on the rendered response for interactive inspection (debugging aid)
        from scrapy.shell import inspect_response
        inspect_response(response, self)
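
In a real spider you would parse the rendered HTML instead of dropping into the shell. A minimal sketch of such a callback; the CSS selector and item fields are illustrative assumptions, not part of the original example:

    def parse(self, response):
        # response.text here is the HTML returned by the Lua script's splash:html()
        yield {
            'url': response.url,
            'title': response.css('title::text').get(),  # hypothetical field; adapt the selector to your target page
        }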

That covers the main points of integrating Splash with Scrapy.
