Running Scrapy spiders via Twisted's inlineCallbacks
I'm getting ImportError: No module named 'spiders', so I assume the environment is not set up at the moment the spiders are invoked, but I don't fully understand how to make this work.
Basically, I want to run a few Scrapy spiders that fill a database, after which my program should do a small amount of computation, and this should happen periodically (for example, every minute). Since Twisted is already among the project's dependencies, I decided to build on it. The project structure looks like this (simplified):
-Project
|-src
|  |- __init__.py
|  |- spiders.py
|-bot.py
In spiders.py I have two separate spiders, and they run fine when I launch them from inside that file. But now I've added some logic to bot.py and came up with this:
from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from twisted.internet import task
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, returnValue

from src.spiders import first_spider, second_spider


def do_some_stuff():
    pass


if __name__ == '__main__':
    runner = CrawlerRunner(get_project_settings())

    @inlineCallbacks
    def cycle():
        # Run the spiders one after another, then do the computation.
        yield runner.crawl(first_spider)
        yield runner.crawl(second_spider)
        returnValue(do_some_stuff())

    timeout = 60.0
    l = task.LoopingCall(cycle)
    l.start(timeout)
    reactor.run()
And the error trace:
2017-04-21 15:32:26 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.logstats.LogStats',
'scrapy.extensions.telnet.TelnetConsole']
2017-04-21 15:32:26 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-04-21 15:32:26 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
Unhandled error in Deferred:
2017-04-21 15:32:26 [twisted] CRITICAL: Unhandled error in Deferred:
2017-04-21 15:32:26 [twisted] CRITICAL:
Traceback (most recent call last):
  File "projectpath/venv/lib/python3.5/site-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
    result = result.throwExceptionIntoGenerator(g)
  File "projectpath/venv/lib/python3.5/site-packages/twisted/python/failure.py", line 393, in throwExceptionIntoGenerator
    return g.throw(self.type, self.value, self.tb)
  File "projectpath/bot.py", line 141, in cycle
    yield runner.crawl(first_spider)
ImportError: No module named 'spiders'
Update. The imports of spiders.py:
import hashlib
import json
import pymongo
import scrapy
from scrapy.crawler import CrawlerRunner
from scrapy.exceptions import DropItem
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor
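(For context: the question never shows the spider or pipeline bodies, only these imports. Judging by pymongo and DropItem, spiders.py presumably also contains a MongoDB item pipeline. Here is a minimal sketch of what that could look like; the URI, database, and collection names are invented for illustration.)

import pymongo
from scrapy.exceptions import DropItem

class MongoPipeline(object):
    # Hypothetical pipeline: the URI, database and collection names
    # below are placeholders, not taken from the question.
    def open_spider(self, spider):
        self.client = pymongo.MongoClient('mongodb://localhost:27017')
        self.collection = self.client['bot_db']['items']

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        if not item.get('text'):
            raise DropItem('item has no text field')
        self.collection.insert_one(dict(item))
        return item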
Answer
So your project structure is
.
├── bot.py
└── src
├── __init__.py
└── spiders.py
bot.py imports src.spiders, and the ImportError means that module cannot be found on Python's search path at the moment the crawl starts. To run it, you should set PYTHONPATH like this:

$ PYTHONPATH=. python3 bot.py

Also, here is a functional one-file Scrapy project that performs a looped scrape every 60 seconds:
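If setting PYTHONPATH on every invocation is inconvenient (cron jobs, IDE run configurations, and so on), an alternative (my suggestion, not part of the original answer) is to prepend the project root to sys.path at the very top of bot.py, before importing src.spiders:

# Alternative to PYTHONPATH (editorial suggestion): make the directory
# containing bot.py (the project root) importable regardless of the
# current working directory the script is launched from.
import os
import sys
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from src.spiders import first_spider, second_spider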
# scraper.py
import datetime
import json
import scrapy
from scrapy.crawler import CrawlerRunner
from scrapy.item import Item, Field
from twisted.internet import reactor
from twisted.internet import task
from twisted.internet.defer import inlineCallbacks
class JsonWriterPipeline(object):

    def open_spider(self, spider):
        self.file = open(spider.settings['JSON_FILE'], 'a')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item
class QuoteItem(Item):
    text = Field()
    author = Field()
    tags = Field()
    spider = Field()
class QuotesSpiderOne(scrapy.Spider):
    name = "quotes1"

    def start_requests(self):
        urls = ['http://quotes.toscrape.com/page/1/', ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            item = QuoteItem()
            item['text'] = quote.css('span.text::text').get()
            item['author'] = quote.css('small.author::text').get()
            item['tags'] = quote.css('div.tags a.tag::text').getall()
            item['spider'] = self.name
            yield item


class QuotesSpiderTwo(scrapy.Spider):
    name = "quotes2"

    def start_requests(self):
        urls = ['http://quotes.toscrape.com/page/2/', ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            item = QuoteItem()
            item['text'] = quote.css('span.text::text').get()
            item['author'] = quote.css('small.author::text').get()
            item['tags'] = quote.css('div.tags a.tag::text').getall()
            item['spider'] = self.name
            yield item
def do_some_stuff():
    print(datetime.datetime.now().strftime("%H:%M:%S"))


@inlineCallbacks
def cycle():
    yield runner.crawl(QuotesSpiderOne)
    yield runner.crawl(QuotesSpiderTwo)
    return do_some_stuff()
if __name__ == '__main__':
    settings = dict()
    settings['USER_AGENT'] = 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    settings['HTTPCACHE_ENABLED'] = True
    settings['JSON_FILE'] = 'items.jl'
    settings['ITEM_PIPELINES'] = dict()
    # The pipeline class defined above is JsonWriterPipeline; register it here.
    settings['ITEM_PIPELINES']['__main__.JsonWriterPipeline'] = 800

    runner = CrawlerRunner(settings=settings)
    timeout = 60.0
    l = task.LoopingCall(cycle)
    l.start(timeout)
    reactor.run()
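One caveat with this pattern: if the Deferred returned by cycle() fails (for example, a spider raises during startup), LoopingCall stops rescheduling it, and the failure only surfaces on the Deferred returned by l.start(). A minimal sketch (the wrapper name is mine, not from the answer) that logs the error and keeps the loop alive:

from twisted.internet.defer import inlineCallbacks
from twisted.python import log

@inlineCallbacks
def safe_cycle():
    # Catch everything so LoopingCall keeps rescheduling us every interval.
    try:
        yield cycle()
    except Exception:
        log.err(None, 'cycle() failed; retrying on the next interval')

# then schedule the wrapper instead:
# l = task.LoopingCall(safe_cycle)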
Run it with:

$ python3 scraper.py

One advantage of a one-file Scrapy project is that it is easy to turn into a single binary.
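For example (my suggestion; the answer does not name a tool), PyInstaller can do the bundling:

$ pip install pyinstaller
$ pyinstaller --onefile scraper.py

Note that Scrapy loads some of its modules dynamically, so the bundle may need additional --hidden-import flags.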