亿邦动力 (Ebrun) scraping example, continuously updated

Posted by lizhen2020

Preface: this article was compiled by the editors of cha138.com. It walks through a scraping example for 亿邦动力 (Ebrun), updated continuously, and is intended as a practical reference.

# -*- coding: utf-8 -*-
import scrapy
from ybdlspider.items import YbdlspiderItem
import re
class YbSpider(scrapy.Spider):
    name = "yb"
    allowed_domains = ["ebrun.com"]
    start_urls = ["http://www.ebrun.com/retail/1"]  # first list page
    num = 1

    def parse(self, response):  # extract title and detail-page URL from the list page
        url_list = response.xpath('//div/a[@eb="com_chan_lcol_fylb"]')
        for i in url_list:
            item = YbdlspiderItem()
            item["title"] = i.xpath("./@title").extract_first()
            item["href"] = i.xpath("./@href").extract_first()

            yield scrapy.Request(item["href"], callback=self.parse_detail, meta={"item": item})

        # build the next list-page URL from the current page number
        beforeurl = response.url
        pat1 = r"/retail/(\d+)"
        page = re.search(pat1, beforeurl).group(1)
        page = int(page) + 1
        if page < 3:  # pagination control: only crawl the first two pages
            nexturl = "http://www.ebrun.com/retail/" + str(page)
            yield scrapy.Request(nexturl, callback=self.parse)

    def parse_detail(self, response):  # extract body text and publish time from the detail page
        item = response.meta["item"]
        item["content"] = response.xpath('//section/article/div[@class="post-text"]//p/text()').extract()
        item["time"] = response.xpath('//html/body/main/section/article/div/p/span[@class="f-right"]').extract_first()
        print(item)
        yield item
        
spider
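The spider imports YbdlspiderItem from ybdlspider.items, but that file is not shown in the post. Below is a minimal sketch of what items.py presumably contains, assuming only the four fields the spider actually assigns (title, href, content, time):

# items.py -- hypothetical sketch; only the fields used by the spider above
import scrapy


class YbdlspiderItem(scrapy.Item):
    title = scrapy.Field()    # article title taken from the list page
    href = scrapy.Field()     # detail-page URL
    content = scrapy.Field()  # list of paragraph texts from the detail page
    time = scrapy.Field()     # publish-time span from the detail page

With the item defined, the spider is started from the project root with scrapy crawl yb (the name declared in the spider class).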
# -*- coding: utf-8 -*-

# Scrapy settings for ybdlspider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = "ybdlspider"

SPIDER_MODULES = ["ybdlspider.spiders"]
NEWSPIDER_MODULE = "ybdlspider.spiders"


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'ybdlspider (+http://www.yourdomain.com)'

# Obey robots.txt rules
ROBOTSTXT_OBEY = True

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16
LOG_LEVEL="WARNING"
# Disable cookies (enabled by default)
#COOKIES_ENABLED = False
USER_AGENT = "Mozilla/5.0 (Linux; U; Android 8.0.0; zh-CN; MHA-AL00 Build/HUAWEIMHA-AL00) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.2987.108 UCBrowser/12.1.4.994 Mobile Safari/537.36"
# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False
# DEFAULT_REQUEST_HEADERS = {
#     'User-Agent': 'Mozilla/5.0 (Linux; U; Android 8.0.0; zh-CN; MHA-AL00 Build/HUAWEIMHA-AL00) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/57.0.2987.108 UCBrowser/12.1.4.994 Mobile Safari/537.36',
#     }
# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'ybdlspider.middlewares.YbdlspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'ybdlspider.middlewares.YbdlspiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    "ybdlspider.pipelines.YbdlspiderPipeline": 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
settings
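ITEM_PIPELINES enables ybdlspider.pipelines.YbdlspiderPipeline, whose code is also not included in the post. Here is a minimal placeholder sketch, assuming the pipeline simply writes each item to a JSON Lines file (the file name and storage format are assumptions, not the author's actual pipeline):

# pipelines.py -- hypothetical sketch; the real pipeline is not shown in the post
import json


class YbdlspiderPipeline:
    def open_spider(self, spider):
        # open the output file once when the spider starts (file name is an assumption)
        self.file = open("ybdl_items.jsonl", "w", encoding="utf-8")

    def process_item(self, item, spider):
        # serialize each scraped item as one JSON line, keeping Chinese text readable
        self.file.write(json.dumps(dict(item), ensure_ascii=False) + "\n")
        return item

    def close_spider(self, spider):
        self.file.close()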

 
