Problem with scrapy-splash rendering JS pages


I have a scraping problem with a page whose content is loaded dynamically. I started the Splash docker image with:

docker run -p 8050:8050 scrapinghub/splash --disable-private-mode
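
For reference, scrapy-splash also needs the standard wiring in the project's settings.py so that requests are routed through that container; a minimal sketch, taken from the scrapy-splash README and assuming Splash is reachable on localhost:8050:

# settings.py -- standard scrapy-splash setup (per the scrapy-splash README)
SPLASH_URL = 'http://localhost:8050'

DOWNLOADER_MIDDLEWARES = {
    'scrapy_splash.SplashCookiesMiddleware': 723,
    'scrapy_splash.SplashMiddleware': 725,
    'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware': 810,
}

SPIDER_MIDDLEWARES = {
    'scrapy_splash.SplashDeduplicateArgsMiddleware': 100,
}

DUPEFILTER_CLASS = 'scrapy_splash.SplashAwareDupeFilter'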

My scrapy-splash spider uses a Lua script that is supposed to scroll the page and return the full-page HTML:

import scrapy
from scrapy_splash import SplashRequest

class MySplashSpider(scrapy.Spider):
    # requires the scrapy-splash docker image running
    name = "psplash" 

    def __init__(self):
        self.domain = 'http://www.phillips.com'
        self.user_agent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:10.0) Gecko/20100101 Firefox/10.0"
        self.script = """
                        function main(splash)
                            local num_scrolls = 3
                            local scroll_delay = 1.0
                            splash:set_viewport_full()
                            splash:wait(5.0)
                            return splash:html()
                        end
                      """ 
        self.splash_args = {'lua_source': self.script,
                            'ua': self.user_agent
                            }

    def start_requests(self):
        base_url = "https://www.phillips.com/auctions/past/filter/Department=20TH%20CENTURY%20%26%20CONTEMPORARY%20ART!Editions!Latin%20America!Photographs"
        yield SplashRequest(base_url,
                            callback = self.parse_pagination,
                            endpoint = 'execute', 
                            args = self.splash_args
                            )      

    def parse_pagination(self, response):
        print('xxxxxxxxxx', response.xpath("//footer/ul/li[last()-1]/a/text()").extract())
        print('xxxxxxxxxx', response.xpath("//h2/a/@href").extract())

When I inspect the page with Chrome dev tools, //footer/ul/li[last()-1]/a/text() gives me 29. Why am I getting no results from response.xpath?

The console output shows no errors:

2017-12-16 13:05:16 [scrapy.core.engine] INFO: Spider opened
2017-12-16 13:05:16 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
xxxxxxxxxx []
xxxxxxxxxx []
2017-12-16 13:05:21 [scrapy.core.engine] INFO: Closing spider (finished)
2017-12-16 13:05:21 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 986,
 'downloader/request_count': 1,
 'downloader/request_method_count/POST': 1,
 'downloader/response_bytes': 163,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 12, 16, 12, 5, 21, 707451),
 'log_count/INFO': 7,
 'log_count/WARNING': 1,
 'response_received_count': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'splash/execute/request_count': 1,
 'splash/execute/response_count/200': 1,
 'start_time': datetime.datetime(2017, 12, 16, 12, 5, 16, 816927)}
2017-12-16 13:05:21 [scrapy.core.engine] INFO: Spider closed (finished) 

What am I missing here?

Answer

Save the response body to an HTML file and check whether the full page you expect is actually being downloaded. If it is, then try your selectors.
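
A minimal sketch of that check, using the callback from the question (the rendered.html file name is just illustrative): dump response.body to disk and open it in a browser to see exactly what Splash returned.

def parse_pagination(self, response):
    # write the page as Splash rendered it, so it can be opened in a browser
    with open('rendered.html', 'wb') as f:
        f.write(response.body)
    print('xxxxxxxxxx', response.xpath("//footer/ul/li[last()-1]/a/text()").extract())
    print('xxxxxxxxxx', response.xpath("//h2/a/@href").extract())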
