Scraping proxy IPs from Xici with Scrapy

Posted by 道高一尺


# Scraping proxy IPs from Xici with Scrapy

The project has two parts: a spider that pulls IP:port pairs from the free-proxy list at http://www.xicidaili.com/nn/, and an item pipeline that writes each pair to MongoDB.
```python
# -*- coding: utf-8 -*-
import scrapy

from xici.items import XiciItem


class XicispiderSpider(scrapy.Spider):
    name = "xicispider"
    # allowed_domains takes bare domains, not URLs with paths
    allowed_domains = ["www.xicidaili.com"]
    start_urls = ["http://www.xicidaili.com/nn/"]

    def parse(self, response):
        # Each proxy is one <tr> in the #ip_list table;
        # the IP and port sit in the 2nd and 3rd <td>.
        for each in response.css("#ip_list tr"):
            ip = each.css("td:nth-child(2)::text").extract_first()
            port = each.css("td:nth-child(3)::text").extract_first()
            if ip:
                # Create a fresh item per row; reusing one item object
                # would mutate and re-emit the same instance on every yield.
                item = XiciItem()
                item["ip_port"] = ip + ":" + port
                yield item
```
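
The spider imports `XiciItem` and assigns to its `ip_port` field, so that field has to be declared in `xici/items.py`. A minimal sketch (anything beyond this one field is an assumption):

```python
# xici/items.py -- minimal sketch; only ip_port is used by the spider
import scrapy


class XiciItem(scrapy.Item):
    # "host:port" string assembled in the spider, e.g. "1.2.3.4:8080"
    ip_port = scrapy.Field()
```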
The item pipeline stores each scraped proxy in MongoDB:

```python
import pymongo


class XiciPipeline(object):

    collection_name = "scrapy_items"

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    # from_crawler is easy to misspell
    @classmethod
    def from_crawler(cls, crawler):
        # Pull the MongoDB connection parameters from settings.py
        return cls(
            mongo_uri=crawler.settings.get("MONGO_URI"),
            mongo_db=crawler.settings.get("MONGO_DB"),
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # insert_one replaces the deprecated Collection.insert
        self.db[self.collection_name].insert_one(dict(item))
        return item
```
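
For `from_crawler` to find the connection parameters, and for Scrapy to run the pipeline at all, `settings.py` needs entries along these lines (the URI, database name, priority value, and the `xici.pipelines` module path are assumptions based on the standard project layout):

```python
# xici/settings.py -- assumed values; adjust the URI and database name
ITEM_PIPELINES = {
    "xici.pipelines.XiciPipeline": 300,
}

MONGO_URI = "mongodb://localhost:27017"
MONGO_DB = "xici"
```

With those in place, running `scrapy crawl xicispider` from the project root starts the spider and fills the `scrapy_items` collection.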


The above covers the main points of scraping proxy IPs from Xici with Scrapy. If it did not solve your problem, the following articles may help.

Python 3 crawler: using an IP proxy pool and random User-Agents with Scrapy

Scraping job listings from the entire Lagou site

Code snippet for starting multiple Scrapy spiders in sequence (Python 3)

Implementing an IP proxy pool in Java

Code snippet for making a Scrapy spider exit on demand (Python 3)

The Python Scrapy framework