High-Performance Asynchronous Crawlers

Thread pool (use judiciously)

import re
import requests
from lxml import etree
from multiprocessing.dummy import Pool
import random

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36'
}

def request_video(url):
    return requests.get(url=url, headers=headers).content


def saveVideo(data):
    name = str(random.randint(0, 9999)) + '.mp4'
    with open(name, 'wb') as fp:
        fp.write(data)
        print(name, 'saved successfully!')

url = 'https://www.pearvideo.com/category_1'
page_text = requests.get(url=url, headers=headers).text

tree = etree.HTML(page_text)
li_list = tree.xpath('//ul[@id="listvideoListUl"]/li')
# instantiate a thread pool object
pool = Pool(4)
video_url_list = []  # all the video links
for li in li_list:
    detail_url = 'https://www.pearvideo.com/' + li.xpath('./div/a/@href')[0]
    detail_page_text = requests.get(url=detail_url, headers=headers).text
    ex = 'srcUrl="(.*?)",vdoUrl='
    video_url = re.findall(ex, detail_page_text, re.S)[0]
    video_url_list.append(video_url)
# fetch the videos' binary data asynchronously across the 4 worker threads
video_data_list = pool.map(request_video, video_url_list)

# persist the videos to disk
pool.map(saveVideo, video_data_list)
pool.close()
pool.join()
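
The same fan-out/fan-in pattern can also be written with the standard library's concurrent.futures. A minimal sketch (not from the original article) that reuses the request_video, saveVideo, and video_url_list defined above:

from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=4) as executor:
    # executor.map blocks until every download finishes, just like Pool.map above
    for video_data in executor.map(request_video, video_url_list):
        saveVideo(video_data)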

Single thread + asynchronous coroutines (recommended)

  • event_loop: the event loop, which works like an endless loop onto which functions can be registered; when their trigger conditions are met, the loop executes them. An ordinary program runs from start to finish in exactly the order it was written. In an asynchronous program, some parts inevitably take a long time to run, so they yield control and continue in the background while other parts run first. When a background operation finishes, it must notify the main program that the next step can proceed, but how long that takes is unpredictable, so the main program has to keep monitoring for the completion message and act on it as soon as it arrives. The loop is that ever-running monitor.

  • coroutine: in Python this usually refers to a coroutine object. We can register a coroutine object on the event loop, and the loop will then call it. Defining a method with the async keyword means calling it does not execute it immediately; the call returns a coroutine object instead.

  • task: a further wrapper around a coroutine object that also records the task's state.

  • future: represents a task that will run in the future or has not yet run; in practice it is not essentially different from a task.

  • We also need the async/await keywords, introduced in Python 3.5 specifically for coroutines: async defines a coroutine, and await suspends execution at a blocking call. The snippet below ties all five concepts together.
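
To make the five terms concrete, here is a tiny illustrative snippet (the fetch name and URL are arbitrary) that uses all of them:

import asyncio

async def fetch(url):       # async def defines a coroutine function
    await asyncio.sleep(1)  # await suspends here without blocking the loop
    return url

c = fetch('http://example.com')  # calling it only creates a coroutine object
loop = asyncio.get_event_loop()  # the event loop that drives everything
task = asyncio.ensure_future(c)  # wrap the coroutine into a task (a future)
loop.run_until_complete(task)    # run the loop until the task completes
print(task.result())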

Basic usage

# basic usage
import asyncio
async def hello(name):
    print('hello to:', name)
# calling the coroutine function returns a coroutine object
c = hello('bobo')

# create an event loop object
loop = asyncio.get_event_loop()

# register the coroutine object on the event loop, then start the loop
loop.run_until_complete(c)
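
As a side note, on Python 3.7 and later the same run can be expressed with asyncio.run(), which creates the loop, drives the coroutine to completion, and closes the loop in a single call:

import asyncio

async def hello(name):
    print('hello to:', name)

asyncio.run(hello('bobo'))  # loop creation, execution, and cleanup in one call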

Using task

# using task
import asyncio
async def hello(name):
    print('hello to:', name)

# calling the coroutine function returns a coroutine object
c = hello('bobo')
# create an event loop object
loop = asyncio.get_event_loop()
# wrap the coroutine one step further, into a task object
task = loop.create_task(c)
print(task)  # <Task pending ...>
loop.run_until_complete(task)
print(task)  # <Task finished ...>

future

# future
import asyncio
async def hello(name):
    print('hello to:', name)

c = hello('bobo')
task = asyncio.ensure_future(c)  # wraps the coroutine in a Task (a Future subclass)
loop = asyncio.get_event_loop()
loop.run_until_complete(task)
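
The practical difference between the two wrappers: loop.create_task() accepts only coroutines, while asyncio.ensure_future() accepts any awaitable and returns Task/Future objects unchanged. A small standalone sketch:

import asyncio

async def hello(name):
    print('hello to:', name)

loop = asyncio.get_event_loop()
task_a = loop.create_task(hello('a'))       # coroutine -> Task
task_b = asyncio.ensure_future(hello('b'))  # any awaitable -> Task/Future
loop.run_until_complete(asyncio.wait([task_a, task_b]))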

Binding a callback (task)

import asyncio
def callback(task):
    print('i am callback:', task.result())

async def hello(name):
    print('hello to:', name)
    return name

c = hello('bobo')

loop = asyncio.get_event_loop()
task = asyncio.ensure_future(c)
# bind a callback function to the task object
task.add_done_callback(callback)
loop.run_until_complete(task)
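
add_done_callback always invokes the callback with the finished task as its only argument. If the callback needs extra arguments, the usual trick is functools.partial, sketched here:

import asyncio
import functools

def callback(prefix, task):  # bound arguments come first, the task is passed last
    print(prefix, task.result())

async def hello(name):
    return name

loop = asyncio.get_event_loop()
task = asyncio.ensure_future(hello('bobo'))
# functools.partial pre-binds 'result:'; asyncio still appends the finished task
task.add_done_callback(functools.partial(callback, 'result:'))
loop.run_until_complete(task)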

Multi-task asynchronous coroutines

import asyncio
import time
import requests
async def get_page(url):
    print('downloading:', url)
    # no async speedup here, because requests is a blocking, non-async module
    response = requests.get(url=url)
    print('response data:', response.text)
    print('download finished:', url)
start = time.time()
urls = [
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay',
    'http://127.0.0.1:5000/tom'
]
tasks = []
loop = asyncio.get_event_loop()
for url in urls:
    c = get_page(url)
    task = asyncio.ensure_future(c)
    tasks.append(task)
loop.run_until_complete(asyncio.wait(tasks))
print('total time:', time.time() - start)
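
If you are stuck with a blocking library such as requests, one common workaround (my sketch, not part of the original example) is to push the blocking call onto a thread pool via loop.run_in_executor, which returns an awaitable future, so the event loop stays free while the request runs:

import asyncio
import time
import requests

async def get_page(loop, url):
    # hand the blocking requests.get to the loop's default thread pool
    response = await loop.run_in_executor(None, requests.get, url)
    print('download finished:', url, len(response.text))

urls = ['http://127.0.0.1:5000/bobo', 'http://127.0.0.1:5000/jay', 'http://127.0.0.1:5000/tom']
start = time.time()
loop = asyncio.get_event_loop()
tasks = [asyncio.ensure_future(get_page(loop, url)) for url in urls]
loop.run_until_complete(asyncio.wait(tasks))
print('total time:', time.time() - start)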

asyncio.sleep()

import asyncio
import time
async def request(url):
    print('downloading:', url)
#     sleep(2)  # non-async code: a blocking call here would defeat asyncio's async effect entirely
    await asyncio.sleep(2)
    print('download finished:', url)
urls = [
    'www.baidu.com',
    'www.taobao.com',
    'www.sogou.com'
]
start = time.time()
loop = asyncio.get_event_loop()
tasks = []  # task list holding the multiple task objects
for url in urls:
    c = request(url)
    task = asyncio.ensure_future(c)
    tasks.append(task)

# register the list of task objects on the event loop
loop.run_until_complete(asyncio.wait(tasks))
print('total time:', time.time() - start)  # about 2 s: the three sleeps overlap instead of adding up to 6 s

Applying multi-task async operations in a crawler

  • Environment setup: pip install aiohttp (a module that supports asynchronous network requests)
import aiohttp
import asyncio
import time

async def get_page(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url=url) as response:
            page_text = await response.text()  # use read() for bytes, json() for JSON
            print(page_text)
start = time.time()
urls = [
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay',
    'http://127.0.0.1:5000/tom',
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay',
    'http://127.0.0.1:5000/tom',
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay',
    'http://127.0.0.1:5000/tom'
]
tasks = []
loop = asyncio.get_event_loop()
for url in urls:
    c = get_page(url)
    task = asyncio.ensure_future(c)
    tasks.append(task)
loop.run_until_complete(asyncio.wait(tasks))
print('total time:', time.time() - start)
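
A real site will throttle or block a crawler that opens too many connections at once. A common refinement (my sketch, not part of the original example) caps concurrency with asyncio.Semaphore and shares a single ClientSession across requests:

import asyncio
import aiohttp

async def get_page(session, sem, url):
    async with sem:  # waits here while 10 requests are already in flight
        async with session.get(url) as response:
            return await response.text()

async def main(urls):
    sem = asyncio.Semaphore(10)  # at most 10 concurrent requests
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(get_page(session, sem, u) for u in urls))

# pages = asyncio.get_event_loop().run_until_complete(main(urls))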

Implementing data parsing: the task callback-binding mechanism

import aiohttp
import asyncio
import time
# callback function: parses the response data
def callback(task):
    print('this is callback()')
    # fetch the response data from the finished task
    page_text = task.result()
    print('parsing the data inside the callback')

async def get_page(url):
    async with aiohttp.ClientSession() as session:
        async with session.get(url=url) as response:
            page_text = await response.text()  # use read() for bytes, json() for JSON
#             print(page_text)
            return page_text
start = time.time()
urls = [
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay',
    'http://127.0.0.1:5000/tom',
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay',
    'http://127.0.0.1:5000/tom',
    'http://127.0.0.1:5000/bobo',
    'http://127.0.0.1:5000/jay',
    'http://127.0.0.1:5000/tom'
]
tasks = []
loop = asyncio.get_event_loop()
for url in urls:
    c = get_page(url)
    task = asyncio.ensure_future(c)
    # bind a callback to the task object to parse the response data
    task.add_done_callback(callback)
    tasks.append(task)
loop.run_until_complete(asyncio.wait(tasks))
print('total time:', time.time() - start)
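
A drop-in alternative to the callback mechanism, assuming the get_page, urls, and loop defined above: asyncio.gather returns the coroutines' return values in order, so the parsing can happen in plain sequential code afterwards.

results = loop.run_until_complete(asyncio.gather(*(get_page(url) for url in urls)))
for page_text in results:
    print('parsing a response of length', len(page_text))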

 
