XPath in Practice
Posted by sunflying
tags:
# 1、Scrape second-hand housing listings from 58.com
```python
import requests
from lxml import etree

# Requirement: scrape the listings from the 58.com second-hand housing page
if __name__ == '__main__':
    # fetch the page source
    url = "https://bj.58.com/ershoufang/"
    # spoof the User-Agent
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0'
    }
    page_text = requests.get(url=url, headers=headers).text
    # parse the data
    tree = etree.HTML(page_text)
    # li_list stores the li element objects
    li_list = tree.xpath('//ul[@class="house-list-wrap"]/li')
    fp = open('58.txt', 'w', encoding='utf-8')
    for li in li_list:
        # ./ stands for the current li: a local (relative) XPath must start with .
        title = li.xpath('./div[2]/h2/a/text()')[0]
        print(title, 'over')
        fp.write(title + '\n')
    fp.close()
```
# 2、Parse and download images from http://pic.netbian.com/4kmeinv/
Two ways to fix garbled Chinese text (mojibake):

1. `response.encoding = 'utf-8'`
2. The general-purpose fix: `img_name = img_name.encode('iso-8859-1').decode('gbk')`
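The second fix works because, when the server does not announce a charset, `requests` decodes the body as ISO-8859-1; re-encoding the garbled string gives back the original bytes, which can then be decoded with the page's real codec (GBK here). A minimal standalone round-trip illustration (the sample string is made up):

```python
# Round-trip illustration of fix 2: GBK bytes wrongly decoded as ISO-8859-1.
raw_bytes = '4K美女壁纸'.encode('gbk')              # bytes as the server sent them
garbled = raw_bytes.decode('iso-8859-1')            # what requests puts in .text
print(garbled)                                      # mojibake: '4KÃÀÅ®±ÚÖ½'
fixed = garbled.encode('iso-8859-1').decode('gbk')  # undo the wrong decode
print(fixed)                                        # '4K美女壁纸'
```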
```python
import requests
from lxml import etree
import os

# Requirement: scrape and download the images from the 4K gallery
if __name__ == '__main__':
    # fetch the page source
    url = "http://pic.netbian.com/4kmeinv/"
    # spoof the User-Agent
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0'
    }
    response = requests.get(url=url, headers=headers)
    # response.encoding = 'utf-8'
    page_text = response.text
    # parse the data: the src attribute and the alt attribute of each img
    tree = etree.HTML(page_text)
    li_list = tree.xpath('//div[@class="slist"]/ul/li')
    if not os.path.exists('./piclibs'):
        os.mkdir('./piclibs')
    for li in li_list:
        img_src = 'http://pic.netbian.com' + li.xpath('./a/img/@src')[0]
        img_name = li.xpath('./a/img/@alt')[0] + '.jpg'
        # general-purpose fix for garbled Chinese file names
        img_name = img_name.encode('iso-8859-1').decode('gbk')
        # print(img_name, img_src)
        # request the image bytes and persist them to disk
        img_data = requests.get(url=img_src, headers=headers).content
        img_path = 'piclibs/' + img_name
        with open(img_path, 'wb') as fp:
            fp.write(img_data)
            print(img_name, 'downloaded!')
```
# 3、Historical monitoring data for cities nationwide https://www.aqistudy.cn/historydata/
```python
import requests
from lxml import etree

# Requirement: parse out the names of all cities
if __name__ == '__main__':
    # fetch the page source
    url = "https://www.aqistudy.cn/historydata/"
    # spoof the User-Agent
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0'
    }
    page_text = requests.get(url=url, headers=headers).text
    tree = etree.HTML(page_text)
    # parse the a tags for both the hot cities and the full city list
    #   div[@class="bottom"]/ul/li/a          hot-city a tags
    #   div[@class="bottom"]/ul/div[2]/li/a   all-city a tags
    # the | operator unions the two XPath expressions into one query
    all_city_names = []
    a_list = tree.xpath('//div[@class="bottom"]/ul/li/a | //div[@class="bottom"]/ul/div[2]/li/a')
    for a in a_list:
        city_name = a.xpath('./text()')[0]
        all_city_names.append(city_name)
    print(all_city_names, len(all_city_names))
```
# 4、Fetch free résumé templates from sc.chinaz.com
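The original post leaves this task as an exercise, so here is a minimal sketch in the same style. The listing URL (`https://sc.chinaz.com/jianli/free.html`) and every XPath expression below are assumptions about the page structure and will likely need adjusting against the live site.

```python
import requests
from lxml import etree
import os

# Sketch: download free résumé templates from sc.chinaz.com.
# NOTE: the listing URL and every XPath below are assumptions about the
# page structure and may need to be adjusted against the live site.
if __name__ == '__main__':
    url = 'https://sc.chinaz.com/jianli/free.html'   # assumed listing page
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/20100101 Firefox/77.0'
    }
    response = requests.get(url=url, headers=headers)
    response.encoding = 'utf-8'   # adjust (or use the GBK trick above) if names come out garbled
    tree = etree.HTML(response.text)

    if not os.path.exists('./resumes'):
        os.mkdir('./resumes')

    # assumed layout: each template card is a div inside #container
    div_list = tree.xpath('//div[@id="container"]/div')
    for div in div_list:
        detail_url = div.xpath('./a/@href')[0]
        name = div.xpath('./a/img/@alt')[0] + '.rar'
        if detail_url.startswith('//'):              # protocol-relative link
            detail_url = 'https:' + detail_url

        # open the detail page and take the first download link (assumed XPath)
        detail_text = requests.get(url=detail_url, headers=headers).text
        detail_tree = etree.HTML(detail_text)
        download_url = detail_tree.xpath('//div[@class="download_wrap"]//li[1]/a/@href')[0]

        data = requests.get(url=download_url, headers=headers).content
        with open('./resumes/' + name, 'wb') as fp:
            fp.write(data)
        print(name, 'downloaded')
```

The per-item flow (listing page → detail page → file URL) is the same pattern as examples 1–3, just one level deeper.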