Python crawler: scraping 10,000+ personal resume templates

Posted Gendan

Preface: this article, compiled by the editors of 小常识网 (cha138.com), introduces how to scrape 10,000+ personal resume templates with a Python crawler; hopefully it is of some reference value to you.

import requests       # send HTTP requests
from lxml import etree  # parse the HTML responses
import time           # pause between requests to avoid an IP ban
import os             # create the download folder

Because the target site has updated its anti-crawling mechanism, simple User-Agent spoofing is no longer enough, so the entire request header is spoofed:

headers = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'zh-CN,zh;q=0.8,zh-TW;q=0.7,zh-HK;q=0.5,en-US;q=0.3,en;q=0.2',
    'Cache-Control': 'max-age=0',
    'Connection': 'keep-alive',
    'Cookie': '__gads=undefined; Hm_lvt_aecc9715b0f5d5f7f34fba48a3c511d6=1614145919,1614755756; '
              'UM_distinctid=177d2981b251cd-05097031e2a0a08-4c3f217f-144000-177d2981b2669b; '
              'sctj_uid=ccf8a73d-036c-78e4-6b1d-6035e961b0d3; '
              'CNZZDATA300636=cnzz_eid%3D1737029801-1614143206-%26ntime%3D1614759211; '
              'Hm_lvt_398913ed58c9e7dfe9695953fb7b6799=1614145927,1614755489,1614755737; '
              '__gads=ID=af6dc030f3c0029f-226abe1136c600e4:T=1614760491:RT=1614760491:S=ALNI_MZAA0rXz7uNmNn6qnuj5BPP7heStw; '
              'ASP.NET_SessionId=3qd454mfnwsqufegavxl5lbm; Hm_lpvt_398913ed58c9e7dfe9695953fb7b6799=1614760490; '
              'bbsmax_user=ce24ea68-9f80-42e3-8d4f-53b13b13c719; avatarId=a034b11b-abc9-4bfd-a8b2-bdf7fef644bc-; '
              'Hm_lpvt_aecc9715b0f5d5f7f34fba48a3c511d6=1614756087',
    'Host': 'sc.chinaz.com',
    'If-None-Match': '',
    'Referer': 'https://sc.chinaz.com/jianli/free.html',
    'Upgrade-Insecure-Requests': '1',
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:86.0) Gecko/20100101 Firefox/86.0',
}
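Before launching the full 700-page crawl, it can help to confirm that this header set actually gets a normal response from the list page. Below is a minimal sanity check (check_url is just an illustrative name; the URL is the first page of the free-template list used later in this post):

check_url = 'https://sc.chinaz.com/jianli/free.html'  # first list page of free resume templates
check = requests.get(url=check_url, headers=headers)
print(check.status_code)                   # 200 suggests the spoofed headers were accepted
print(check.headers.get('Content-Type'))   # expect an HTML content type rather than a block page

If the status code is not 200, or the body looks like a captcha page, the Cookie and User-Agent values above probably need to be refreshed from a real browser session.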

If the download folder does not exist yet, create it:

if not os.path.exists('./moban'):
    os.mkdir('./moban')
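As a side note on this folder check: os.makedirs with exist_ok=True does the same thing in a single call and also creates any missing parent directories. A minimal equivalent would be:

os.makedirs('./moban', exist_ok=True)  # create ./moban if it is missing, do nothing otherwise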

for i in range(1, 701):  # roughly 700 list pages * 20 templates per page

    print(f"About to scrape page {i} of resume templates")
    print("Pausing to avoid an IP ban......")  # pause notice
    time.sleep(15)  # pause 15s per list page; each list page holds links to 20 templates
    url = f'https://sc.chinaz.com/jianli/free_{str(i)}.html'  # build the list-page URL for page i
    try:  # exception handling
        response = requests.get(url=url, headers=headers)  # fetch the list page
    except Exception as e:  # name the exception e
        print(e)  # print the exception
        print('Connection failed, skipping!!!')  # if it will not connect, do not force it
        print("Pausing after a failed list-page request to avoid an IP ban......")  # pause notice
        time.sleep(5)  # pause 5s after each exception
        continue  # skip this iteration
    response.encoding = 'utf-8'  # decode the Chinese content as utf-8
    page = response.text  # get the response body as text
    tree = etree.HTML(page)  # parse it with etree
    a_list = tree.xpath("//div[@class='box col3 ws_block']/a")  # xpath returns the 20 template links on this page
    for a in a_list:
        resume_href = 'https:' + a.xpath('./@href')[0]  # build the detail-page URL from the scraped link
        resume_name = a.xpath('./img/@alt')[0]  # scrape the template name (first element of the list)
        resume_name = resume_name.strip()  # strip leading and trailing whitespace
        try:
            resume_response = requests.get(url=resume_href, headers=headers)  # open the template detail page
        except Exception as e:
            print(e)
            print('Connection failed, skipping!!!')
            print("Pausing after a failed detail-page request to avoid an IP ban......")
            time.sleep(5)
            continue
        resume_response.encoding = 'utf-8'  # decode the Chinese content as utf-8
        resume_page = resume_response.text  # get the response body as text
        resume_tree = etree.HTML(resume_page)  # parse it with etree
        resume_link = resume_tree.xpath('//ul[@class="clearfix"]/li/a/@href')[0]  # xpath extracts the download link
        try:
            download = requests.get(url=resume_link, headers=headers).content  # fetch the archive as binary data
        except Exception as e:
            print(e)
            print('Connection failed, skipping!!!')
            print("Pausing after a failed download to avoid an IP ban......")
            time.sleep(5)
            continue
        download_path = './moban/' + resume_name + '.rar'  # build the save path and file name
        with open(download_path, 'wb') as fp:  # open the file in binary write mode
            fp.write(download)  # save the file
            print(resume_name, 'downloaded successfully!!!')  # success notice

The above is the main content on scraping 10,000+ personal resume templates with a Python crawler. If it did not solve your problem, the following articles may be helpful:

Code snippet for launching multiple scrapy spiders in order (Python 3)

How to scrape data on 1 million Google Play apps with 30 lines of code

Code snippet for making a scrapy spider exit on its own (Python 3)

Hands-on Python crawler project: downloading all 950 glamour photo sets from an entire site

Python crawler beginner tutorial 4-100: scraping 美空网 images without logging in