Multi-threaded web scraping with Python

Posted by PerilongGideon


Editor's note: this article, compiled by the editors of cha138.com, covers multi-threaded web scraping with Python; we hope you find it a useful reference.

# -*- coding: utf-8 -*-
'''
Created on 2018-12-25

@author: Administrator
'''
from multiprocessing.dummy import Pool as pl
import csv
import requests
from lxml import etree


def spider(url):
    header = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36"}
    r = requests.get(url=url, headers=header)
    return r.json()

def spider_detail(url):
    resp = spider(url)
    title = resp.get('data').get('title')
    print(title)
    content = resp.get('data').get('content')
    try:
        # Strip characters that are illegal in Windows file names
        title_clear = title.replace('|', '').replace('?', '')
        # Turn paragraph tags into blank lines before extracting the text
        content_clear = content.replace('</p><p>', '\n\n').replace('<p>', '')
        sel = etree.HTML(content_clear)
        content_clear = sel.xpath('string(//*)')
        artical_write(title_clear, content_clear)
        print(title_clear)
    except Exception as e:
        # Skip articles whose title or content is missing or cannot be saved
        print('skipped:', e)
    
def get_all_urls(page_number):
    for i in range(1, page_number + 1):
        url = 'https://36kr.com/api/search-column/mainsite?per_page=20&page=' + str(i)
        resp = spider(url)
        artical_data = resp.get('data').get('items')
        for url_data in artical_data:
            number = url_data.get('id')
            artical_url = 'https://36kr.com/api/post/' + str(number) + '/next'
            yield artical_url
    
def artical_write(title, content):
    with open('d:/spider_data/11.11/' + title + '.txt', 'wt', encoding='utf-8') as f:
        f.write(content)

if __name__ == '__main__':
    # Number of worker threads (defaults to the CPU core count if omitted)
    pool = pl(4)

    # Collect the article URLs
    all_url = []
    for url in get_all_urls(100):
        all_url.append(url)

    # Scrape with multiple threads
    pool.map(spider_detail, all_url)
    pool.close()
    pool.join()
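The script above uses multiprocessing.dummy.Pool, which exposes the process-pool API but runs the workers as threads. Since Python 3.2 the standard library offers the same pattern through concurrent.futures.ThreadPoolExecutor, which handles close/join automatically via a context manager. A minimal sketch of the equivalent structure — here fetch and the example.com URLs are stand-ins for the spider function and the 36kr API URLs above:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(url):
    # Stand-in for a real request such as requests.get(url).json()
    return 'fetched ' + url

# Stand-in URL list; in the script above this comes from get_all_urls()
urls = ['https://example.com/page/' + str(i) for i in range(1, 6)]

# The with-block waits for all workers to finish, replacing close()/join()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, urls))
```

As with multiprocessing.dummy, pool.map preserves the input order of the URLs in its results, so this is a drop-in structural replacement.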

  

That concludes this overview of multi-threaded web scraping with Python.
