scrapy: parallel

Posted by 王将军之武库

Limiting Parallelism
jcalderone
Concurrency can be a great way to speed things up, but what happens when you have too much concurrency? Overloading a system or a network can be detrimental to performance, and there is often a peak in performance at a particular level of concurrency. With Twisted 2.5 and Python 2.5, executing a fixed number of tasks in parallel is easier than ever:
from twisted.internet import defer, task

def parallel(iterable, count, callable, *args, **named):
    # One Cooperator shared by all of the workers.
    coop = task.Cooperator()
    # A single shared generator: each worker pulls its next task from it,
    # so at most `count` tasks are ever in flight at once.
    work = (callable(elem, *args, **named) for elem in iterable)
    # coiterate() returns a Deferred that fires when the iterator is
    # exhausted; the DeferredList fires once all `count` workers finish.
    return defer.DeferredList([coop.coiterate(work) for i in xrange(count)])
Here's an example of using this to save the contents of a bunch of URLs which are listed one per line in a text file, downloading at most fifty at a time:
from twisted.python import log
from twisted.internet import reactor
from twisted.web import client

def download((url, fileName)):
    # Tuple unpacking in the signature is Python 2 syntax.
    return client.downloadPage(url, file(fileName, 'wb'))

# strip() drops the trailing newline on each line read from the file;
# without it the URLs passed to downloadPage would be corrupt.
urls = [(url.strip(), str(n)) for (n, url) in enumerate(file('urls.txt'))]
finished = parallel(urls, 50, download)
finished.addErrback(log.err)
finished.addCallback(lambda ign: reactor.stop())
reactor.run()
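
The Cooperator is not the only way to cap concurrency in Twisted. As a rough alternative sketch (not from the original post), twisted.internet.defer.DeferredSemaphore hands each call a token and queues the rest, which limits the same download job to fifty at a time:

from twisted.python import log
from twisted.internet import defer, reactor
from twisted.web import client

def download((url, fileName)):
    return client.downloadPage(url, file(fileName, 'wb'))

# A semaphore with 50 tokens: run() waits for a free token, calls the
# function, and releases the token when the returned Deferred fires.
sem = defer.DeferredSemaphore(50)
urls = [(url.strip(), str(n)) for (n, url) in enumerate(file('urls.txt'))]
finished = defer.DeferredList([sem.run(download, pair) for pair in urls])
finished.addErrback(log.err)
finished.addCallback(lambda ign: reactor.stop())
reactor.run()

Unlike the generator-based version, this builds one Deferred per URL up front; the Cooperator pulls work lazily from the generator, which matters when the list is very large.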

[Edit: The original generator expression in this post was of the form ((yield foo()) for x in y). The yield here is completely superfluous, of course, so I have removed it.]
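
To see why that yield was superfluous, here is a small illustration (not from the original post): in Python 2, a yield inside a generator expression adds an extra, useless None result for every element.

def foo():
    return 42

# The original form: each element is followed by a spurious None,
# because the value of the inner yield (None from next()) is yielded too.
g = ((yield foo()) for x in range(2))
print list(g)   # [42, None, 42, None]

# The plain form produces exactly one result per element.
h = (foo() for x in range(2))
print list(h)   # [42, 42]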

from twisted.internet import defer, reactor, task

l = [3, 4, 5, 6]

def f(a):
    print a

# The generator is shared state: every consumer advances the same iterator.
work = (f(elem) for elem in l)

# Pull the first three elements by hand; this prints 3, 4 and 5.
for i in range(3):
    work.next()

coop = task.Cooperator()
# Register the shared generator with the cooperator five times.  No
# further iteration happens yet, because the reactor is never started,
# so the print shows five Deferreds still waiting:
d = [coop.coiterate(work) for _ in range(5)]
print d

[<Deferred at 0x1aa0c88 waiting on Deferred at 0x1aa0d50>, <Deferred at 0x1aa0dc8 waiting on Deferred at 0x1aa0e90>, <Deferred at 0x1aa0f30 waiting on Deferred at 0x1aa4030>, <Deferred at 0x1aa40d0 waiting on Deferred at 0x1aa4198>, <Deferred at 0x1aa4238 waiting on Deferred at 0x1aa4300>]
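
Nothing more is printed because the reactor never runs; coiterate() only schedules work, it does not execute it. A minimal sketch (an addition, not from the original experiment) that lets the Cooperator actually drain the generator and then shut down:

from twisted.internet import defer, reactor, task

l = [3, 4, 5, 6]

def f(a):
    print a

work = (f(elem) for elem in l)

coop = task.Cooperator()
# Three workers share the generator; the DeferredList fires once all of
# them find it exhausted, and the callback stops the reactor.
done = defer.DeferredList([coop.coiterate(work) for _ in range(3)])
done.addCallback(lambda ign: reactor.stop())
reactor.run()   # prints 3, 4, 5 and 6, one element at a time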
