Setting up a cookie pool in Scrapy

Posted by dahu的菜园子

The code below is fairly complete and can be used directly.

It covers:

  • Fetching cookies from the target page
  • Storing them in MongoDB
  • Periodically deleting stale cookies
  • A Scrapy middleware that draws cookies from the pool
#!/usr/bin/python
# coding=utf-8
# __author__ = 'dahu'
# data = 2017-
#
import requests
import time
from pymongo import MongoClient
import cookielib
import urllib2
from bson.objectid import ObjectId

url = 'https://www.so.com'
# url = 'https://cn.bing.com/translator'
client = MongoClient('localhost', 27017)
db = client['save_cookie']
collection = db['san60cookie']

def get_header():
    header={
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Encoding": "gzip, deflate, br",
        "Accept-Language": "en-US,en;q=0.8,zh-CN;q=0.6,zh;q=0.4",
        "Cache-Control": "max-age=0",
        "Connection": "keep-alive",
        "Host": "www.so.com",
        "Upgrade-Insecure-Requests": "1",
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36",
    }
    return header


def get_cookie_lib():
    cookie = cookielib.CookieJar()
    handler = urllib2.HTTPCookieProcessor(cookie)
    opener = urllib2.build_opener(handler)
    response = opener.open(url)
    # for item in cookie:
    #     print "%s : %s" % (item.name, item.value)
    cookie_dict = {}
    for cook in cookie:
        cookie_dict[cook.name] = cook.value
    return cookie_dict


def save_cookie_into_mongodb(cookie):
    print 'insert'
    insert_data = {}
    insert_data['cookie'] = cookie
    insert_data['insert_time'] = time.strftime('%Y-%m-%d %H:%M:%S')
    insert_data['request_url'] = url
    insert_data['insert_timestamp'] = time.time()
    collection.insert_one(insert_data)


def delete_timeout_cookie(request_url):
    time_out = 300
    for data in collection.find({'request_url': request_url}):
        if (time.time() - data.get('insert_timestamp')) > time_out:
            print 'delete: %s' % data.get('_id')
            collection.delete_one({'_id': ObjectId(data.get('_id'))})
    # If querying by ObjectId looks unfamiliar, see
    # http://api.mongodb.com/python/current/tutorial.html#querying-by-objectid
 
def get_cookie_from_mongodb():
    cookies = [data.get('cookie') for data in collection.find()]
    return cookies


if __name__ == '__main__':
    num = 0
    while 1:
        if num == 2:
            print 'deleting'
            delete_timeout_cookie(url)
            num = 0
        else:
            cookie = get_cookie_lib()
            save_cookie_into_mongodb(cookie)
            num += 1
        time.sleep(5)
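
For reference, each document that save_cookie_into_mongodb writes looks roughly like this (the cookie names and values are illustrative placeholders, not real cookies from www.so.com; the _id field is added automatically by MongoDB):

{
    'cookie': {'some_cookie': 'some_value', 'another_cookie': 'another_value'},
    'insert_time': '2017-08-30 12:00:00',
    'request_url': 'https://www.so.com',
    'insert_timestamp': 1504065600.0,
    '_id': ObjectId('...')
}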

 

The corresponding middleware file can be written like this:

import random

class CookiesMiddleware(object):
    # get_cookie_from_mongodb is the helper defined in the cookie-pool script above;
    # import it from wherever that module lives in your project.
    def process_request(self, request, spider):
        cookie = random.choice(get_cookie_from_mongodb())
        request.cookies = cookie
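
The middleware only takes effect once it is registered in the project's settings.py. A minimal sketch, assuming the project is called myproject and the class lives in myproject/middlewares.py (both paths are placeholders):

# settings.py
DOWNLOADER_MIDDLEWARES = {
    # any priority below 700 works: request.cookies is then set before Scrapy's
    # built-in cookies middleware (priority 700) turns it into the Cookie header
    'myproject.middlewares.CookiesMiddleware': 543,
}
COOKIES_ENABLED = True  # the default; the built-in middleware is what actually sends request.cookies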

 
