Crawling Zhihu images with Scrapy

Posted by cuirenlao


Preface: this post, compiled by the cha138.com editors, introduces how to crawl Zhihu images with Scrapy; hopefully it serves as a useful reference.

settings.py

# -*- coding: utf-8 -*-

# Scrapy settings for zhihutupian project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://doc.scrapy.org/en/latest/topics/settings.html
#     https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://doc.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'zhihutupian'

SPIDER_MODULES = ['zhihutupian.spiders']
NEWSPIDER_MODULE = 'zhihutupian.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'zhihutupian (+http://www.yourdomain.com)'
USER_AGENT = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36"
# Obey robots.txt rules
ROBOTSTXT_OBEY = False
#LOG_LEVEL = "ERROR"

IMAGES_STORE = './imgsLib'  # the folder is created automatically

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://doc.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'zhihutupian.middlewares.ZhihutupianSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'zhihutupian.middlewares.ZhihutupianDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://doc.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://doc.scrapy.org/en/latest/topics/item-pipeline.html
ITEM_PIPELINES = {
    'zhihutupian.pipelines.ImgproPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
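The two settings doing the real work here are `IMAGES_STORE` and `ITEM_PIPELINES`. Since the custom pipeline's `file_path` returns a bare filename, each image lands directly under `IMAGES_STORE`. A minimal stdlib sketch of how that final path is composed (`"0.jpg"` is a hypothetical name from the spider's counter):

```python
import os

# Sketch: ImagesPipeline joins IMAGES_STORE with whatever file_path()
# returns, so a bare name like "0.jpg" ends up directly under ./imgsLib.
IMAGES_STORE = "./imgsLib"

def saved_path(img_name):
    # Mirrors how the downloaded file's location is derived.
    return os.path.join(IMAGES_STORE, img_name)

print(saved_path("0.jpg"))  # ./imgsLib/0.jpg
```

Returning a relative path with subfolders from `file_path` (e.g. `"full/0.jpg"`) would create subdirectories under `IMAGES_STORE` instead.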

 

pipelines.py

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: https://doc.scrapy.org/en/latest/topics/item-pipeline.html


from scrapy.pipelines.images import ImagesPipeline
import scrapy


class ImgproPipeline(ImagesPipeline):
    def get_media_requests(self, item, info):  # note the method name is get_media_requests, with a trailing "s"
        img_src = item['img_src']
        print(img_src)
        # Pass the item along so file_path can read the filename from it.
        yield scrapy.Request(url=img_src, meta={'item': item})

    def file_path(self, request, response=None, info=None):
        img_name = request.meta['item']['img_name']
        return img_name

    def item_completed(self, results, item, info):
        return item
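For reference, the `results` argument that `item_completed` receives is a list of two-element tuples, one per media request: a success flag paired with either a file-info dict or the failure. A small sketch with made-up values, showing how the paths of successful downloads could be pulled out:

```python
# Shape of `results` as passed to ImagesPipeline.item_completed:
# (success, info_or_failure) tuples. All values here are made up.
results = [
    (True, {"url": "https://example.com/0.jpg", "path": "0.jpg", "checksum": "abc123"}),
    (False, Exception("download failed")),
]

# Keep only the stored paths of images that actually downloaded.
image_paths = [info["path"] for ok, info in results if ok]
print(image_paths)  # ['0.jpg']
```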

items.py

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# https://doc.scrapy.org/en/latest/topics/items.html

import scrapy


class ZhihutupianItem(scrapy.Item):

    # define the fields for your item here like:
    img_name = scrapy.Field()
    img_src = scrapy.Field()
    

zhihu.py

# -*- coding: utf-8 -*-
import scrapy
from zhihutupian.items import ZhihutupianItem

class ZhihuSpider(scrapy.Spider):
    name = 'zhihu'
    # allowed_domains = ['www.zhihu.com']
    start_urls = ['https://www.zhihu.com/question/xxxxxx']
    i = 0

    def parse(self, response):
        div_list = response.xpath("//figure")  # every <figure> tag wrapping an image
        for img_src in div_list:
            img_name = str(self.i) + '.jpg'
            src_div = img_src.xpath("./img/@data-original").extract_first()
            # print(src_div)
            item = ZhihutupianItem()
            item['img_name'] = img_name
            item['img_src'] = src_div
            # print(item['img_name'])
            # print(item['img_src'])
            self.i += 1
            yield item
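One thing worth guarding against: `extract_first()` returns `None` when `@data-original` is missing from a `<figure>`, and yielding an item with `img_src=None` would crash the download pipeline. A stand-alone sketch of the counter-based naming with such a guard (all URLs are made up):

```python
# Sketch of the spider's sequential naming plus a None guard.
# extract_first() yields None on an XPath miss; skip those figures.
srcs = ["https://pic1.example/a.jpg", None, "https://pic2.example/b.jpg"]

i = 0
items = []
for src in srcs:
    if src is None:  # XPath missed; don't yield an item with no URL
        continue
    items.append({"img_name": str(i) + ".jpg", "img_src": src})
    i += 1

print([it["img_name"] for it in items])  # ['0.jpg', '1.jpg']
```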

 

middlewares.py is left unchanged.

Note: using this

 
