python3 crawler (urllib + BeautifulSoup): when BeautifulSoup's automatic encoding detection goes wrong
Version: Python 3.x
OS: Windows 7
Editor: PyCharm
Target page: a Ctrip page (Seoul, South Korea: 6-day/5-night semi-guided tour, direct flight + ski resort or Nami Island + Lotte World + 1 free day - [Ctrip Travel])
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup

def getComment(url):
    try:
        html = urlopen(url)
    except HTTPError:
        return None  # the page does not exist on the server, so return None directly
    try:
        soup = BeautifulSoup(html.read(), "lxml")
        comment = soup.body.find("ul", {"class": "detail_comment_list"}).find("li")
    except AttributeError:
        return None  # a tag in the chain was missing (find() returned None)
    return comment

comment = getComment("http://vacations.ctrip.com/grouptravel/p11504202s32.html#ctm_ref=va_hom_s32_prd_p1_l2_2_img")
if comment is None:
    print("comment could not be found")
else:
    comment1 = comment.get_text()
    print(comment1)
However, this prints garbled text (mojibake).
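To see why, here is a minimal offline sketch of the mismatch (the sample string is hypothetical, not the page's actual content): bytes encoded as GBK but decoded as UTF-8 do not round-trip.

```python
text = "携程旅游"                # hypothetical sample of the page's Chinese text
gbk_bytes = text.encode("gbk")  # what the server actually sends

# Decoding with the declared (but wrong) codec yields mojibake:
garbled = gbk_bytes.decode("utf-8", errors="replace")
print(garbled)

# Decoding with the real codec recovers the text:
print(gbk_bytes.decode("gbk"))
```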
Solutions:
(1) Use the third-party requests library together with BeautifulSoup; requests handles encoding detection well.
(2) Detect the encoding directly: the page declares UTF-8, but the bytes are actually GBK-encoded.
from urllib.request import urlopen
import chardet
a = urlopen('http://vacations.ctrip.com/grouptravel/p11504202s32.html#ctm_ref=va_hom_s32_prd_p1_l2_2_img').read()
b = chardet.detect(a)
print(b)
#{'encoding': 'GB2312', 'confidence': 0.99}
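Option (1) can be sketched as follows; this is a hypothetical helper, not code from the original post. requests guesses the real encoding from the response bytes via apparent_encoding instead of trusting only the declared header.

```python
import requests  # third-party: pip install requests

def fetch_text(url):
    """Fetch a page, decoding with the encoding detected from the bytes."""
    resp = requests.get(url, timeout=10)
    # apparent_encoding is the charset-detection guess from the body;
    # for this Ctrip page it is GB2312/GBK rather than the declared UTF-8.
    resp.encoding = resp.apparent_encoding
    return resp.text

# usage (network required):
# print(fetch_text("http://vacations.ctrip.com/grouptravel/p11504202s32.html"))
```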
With the encoding confirmed, pass it to BeautifulSoup explicitly:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
def getComment(url):
    try:
        html = urlopen(url)
    except HTTPError:
        return None  # the page does not exist on the server, so return None directly
    try:
        soup = BeautifulSoup(html.read(), "lxml", from_encoding="gbk")
        comment = soup.body.find("ul", {"class": "detail_comment_list"}).find("li")
    except AttributeError:
        return None  # a tag in the chain was missing (find() returned None)
    return comment

comment = getComment("http://vacations.ctrip.com/grouptravel/p11504202s32.html#ctm_ref=va_hom_s32_prd_p1_l2_2_img")
if comment is None:
    print("comment could not be found")
else:
    comment1 = comment.get_text()
    print(comment1)
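The effect of from_encoding can be checked offline with a small GBK document (the markup below is hypothetical, standing in for the Ctrip page):

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# A tiny GBK-encoded document with the same structure the crawler targets:
html_bytes = ('<html><body><ul class="detail_comment_list">'
              '<li>很好玩</li></ul></body></html>').encode("gbk")

# from_encoding pins the decode to GBK instead of relying on auto-detection,
# which can guess wrong on short or mislabeled input:
soup = BeautifulSoup(html_bytes, "html.parser", from_encoding="gbk")
li = soup.body.find("ul", {"class": "detail_comment_list"}).find("li")
print(li.get_text())
```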