Notes: scrapy Request/Response
1. Introduction
Scrapy uses Request and Response objects to crawl websites.
2. Request
class scrapy.http.Request(url[, callback, method='GET', headers, body, cookies, meta, encoding='utf-8', priority=0, dont_filter=False, errback, flags])
Parameters (a combined sketch follows this list):
url (string): the URL of this request.
callback (callable): the function that will be called with the response of this request (once it's downloaded) as its first parameter (see section 2.1 on passing additional data to callbacks). If a Request doesn't specify a callback, the spider's parse() method will be used. Note that if exceptions are raised during processing, errback is called instead.
method (string): the HTTP method of this request. Defaults to 'GET'.
meta (dict): the initial values for the Request.meta attribute. If given, the dict passed in this parameter will be shallow copied. Used to pass data between callbacks; note the shallow copy.
body (str or unicode): the request body. If a unicode is passed, it is encoded to str using the encoding passed (which defaults to utf-8). If body is not given, an empty string is stored. Regardless of the type of this argument, the final value stored will be a str (never unicode or None).
headers (dict): the HTTP headers of this request.
cookies (dict or list):
Cookies can be sent in two forms.
As a dict:
request_with_cookies = Request(url="http://www.example.com",
                               cookies={'currency': 'USD', 'country': 'UY'})
As a list of dicts:
request_with_cookies = Request(url="http://www.example.com",
                               cookies=[{'name': 'currency',
                                         'value': 'USD',
                                         'domain': 'example.com',
                                         'path': '/currency'}])
The latter form allows for customizing the domain and path attributes of the cookie. This is only useful if the cookies are saved for later requests.
When some site returns cookies (in a response) those are stored in the cookies for that domain and will be sent again in future requests. That’s the typical behaviour of any regular web browser. However, if, for some reason, you want to avoid merging with existing cookies you can instruct Scrapy to do so by setting the dont_merge_cookies key to True in the Request.meta.
Example of not merging with existing cookies:
request_with_cookies = Request(url="http://www.example.com",
                               cookies={'currency': 'USD', 'country': 'UY'},
                               meta={'dont_merge_cookies': True})
encoding (string): the encoding of this request (defaults to 'utf-8').
priority (int): the priority of this request (defaults to 0); the scheduler uses priority to define the order in which requests are processed. I haven't had a use for it so far.
dont_filter (boolean): indicates that this request should not be filtered by the scheduler's duplicate filter. Use this when you want to perform an identical request multiple times, to ignore the duplicates filter. Use it with care, or you will get into crawling loops. Defaults to False.
errback (callable): a function that will be called if any exception is raised while processing the request.
flags (list): flags sent to the request; can be used for logging or similar purposes.
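A minimal sketch pulling several of these parameters together (the URL, spider name, and callback/errback names below are placeholders for illustration, not part of the original notes):
import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"  # hypothetical spider name

    def start_requests(self):
        yield scrapy.Request(
            url="http://www.example.com/page",  # placeholder URL
            callback=self.parse_page,           # receives the downloaded Response
            errback=self.handle_error,          # receives a Failure on error
            method="GET",
            headers={"Referer": "http://www.example.com"},
            meta={"note": "arbitrary data"},    # this dict is shallow copied
            priority=10,                        # higher priority is scheduled earlier
            dont_filter=True,                   # bypass the duplicate filter
        )

    def parse_page(self, response):
        self.logger.info("Got %s", response.url)

    def handle_error(self, failure):
        self.logger.error(repr(failure))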
The class also has the following attributes:
url: the URL of this request. This attribute holds the escaped URL, so it can differ from the URL passed to the constructor. It is read-only; to change it, use replace() (see the sketch after this list).
method: a string representing the HTTP method of the request, in uppercase, e.g. "GET", "POST", "PUT".
headers: a dictionary-like object containing the request headers.
body: a str containing the request body. It is read-only; to change it, use replace().
meta: a dict containing arbitrary metadata for this request. It is empty for new requests and is usually populated by different Scrapy components (extensions, middlewares, etc.).
copy(): returns a new Request which is a copy of this one.
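Since url and body are read-only, changing them means building a new Request through replace(); a quick sketch (the URLs and body values are illustrative):
import scrapy

request = scrapy.Request("http://www.example.com/a")
# replace() returns a new Request with the given fields overridden;
# all other fields keep their original values.
new_request = request.replace(url="http://www.example.com/b",
                              method="POST",
                              body="key=value")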
2.1. meta: passing additional data
Passing data between callbacks with the meta parameter:
def parse_page1(self, response):
    item = MyItem()
    item['main_url'] = response.url
    request = scrapy.Request("http://www.example.com/some_page.html",
                             callback=self.parse_page2)
    request.meta['item'] = item
    yield request

def parse_page2(self, response):
    item = response.meta['item']
    item['other_url'] = response.url
    yield item
meta also has some special keys recognized by Scrapy that control how the request is handled, for example (see the sketch below):
download_timeout: the amount of time (in seconds) that the downloader will wait before timing out.
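A quick sketch of setting this key inside a callback (the URL and callback name are placeholders):
def parse(self, response):
    # download_timeout is a special meta key read by the downloader
    yield scrapy.Request("http://www.example.com/slow_page.html",
                         callback=self.parse_page,
                         meta={'download_timeout': 10})  # give up after 10 seconds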
2.2. errbacks: exception handling
import scrapy
from scrapy.spidermiddlewares.httperror import HttpError
from twisted.internet.error import DNSLookupError
from twisted.internet.error import TimeoutError, TCPTimedOutError

class ErrbackSpider(scrapy.Spider):
    name = "errback_example"
    start_urls = [
        "http://www.httpbin.org/",            # HTTP 200 expected
        "http://www.httpbin.org/status/404",  # Not found error
        "http://www.httpbin.org/status/500",  # server issue
        "http://www.httpbin.org:12345/",      # non-responding host, timeout expected
        "http://www.httphttpbinbin.org/",     # DNS error expected
    ]

    def start_requests(self):
        for u in self.start_urls:
            yield scrapy.Request(u, callback=self.parse_httpbin,
                                 errback=self.errback_httpbin,
                                 dont_filter=True)

    def parse_httpbin(self, response):
        self.logger.info('Got successful response from {}'.format(response.url))
        # do something useful here...

    def errback_httpbin(self, failure):
        # log all failures
        self.logger.error(repr(failure))

        # in case you want to do something special for some errors,
        # you may need the failure's type:
        if failure.check(HttpError):
            # these exceptions come from HttpError spider middleware
            # you can get the non-200 response
            response = failure.value.response
            self.logger.error('HttpError on %s', response.url)
        elif failure.check(DNSLookupError):
            # this is the original request
            request = failure.request
            self.logger.error('DNSLookupError on %s', request.url)
        elif failure.check(TimeoutError, TCPTimedOutError):
            request = failure.request
            self.logger.error('TimeoutError on %s', request.url)
3. Response
class scrapy.http.Response(url[, status=200, headers=None, body=b'', flags=None, request=None])
Parameters:
url (string): the URL of this response.
status (integer): the HTTP status of the response. Usually 200.
headers (dict): the headers of this response. The dict values can be strings (for single-valued headers) or lists (for multi-valued headers).
body (bytes): the response body. Note that it is bytes and must be decoded before use as text.
flags (list): a list containing the initial values for the Response.flags attribute. If given, the list will be shallow copied.
request (Request object): the initial value of the Response.request attribute. This represents the Request that generated this response.
Response also has some subclasses, but in general they are not used directly, so they are not discussed here.
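A short sketch of reading these attributes inside a spider callback (the utf-8 assumption and log messages are illustrative):
def parse(self, response):
    self.logger.info("status: %s", response.status)  # e.g. 200
    self.logger.info("content type: %s", response.headers.get('Content-Type'))
    text = response.body.decode('utf-8')  # body is bytes; encoding assumed to be utf-8 here
    self.logger.info("from request: %s", response.request.url)  # the Request that produced this Response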