BeautifulSoup
Posted by xwjhyy
The BeautifulSoup Library
Introduction to BeautifulSoup
Beautiful Soup is a Python library whose main purpose is to extract data from web pages. The official description is as follows:
Beautiful Soup provides a handful of simple, Pythonic functions for navigating, searching, and modifying the parse tree. It is a toolkit that parses a document and hands you the data you need to scrape; because it is so simple, a complete application can be written with very little code.
Beautiful Soup automatically converts input documents to Unicode and output documents to UTF-8. You do not need to think about encodings unless the document does not declare one, in which case Beautiful Soup cannot detect the encoding automatically and you simply have to state the original encoding yourself.
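As a minimal, hypothetical sketch of that (the GBK byte string below is an invented example, not from the original post), you can pass the original encoding through the from_encoding argument and check what was used via original_encoding:
from bs4 import BeautifulSoup
# a byte string whose encoding (GBK) is not declared in the markup itself
markup = '<p>你好, Beautiful Soup</p>'.encode('gbk')
# tell Beautiful Soup the original encoding; the parsed document is Unicode
soup = BeautifulSoup(markup, 'lxml', from_encoding='gbk')
print(soup.original_encoding)   # gbk
print(soup.p.string)            # 你好, Beautiful Soup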
Beautiful Soup has become an excellent tool on par with parsers such as lxml and html5lib, giving users the flexibility to choose different parsing strategies or to trade them for speed.
BeautifulSoup supports the HTML parser in Python's standard library as well as several third-party parsers. If no third-party parser is installed, Python falls back to the default standard-library parser. The lxml parser is more powerful and faster, and it is the recommended choice.
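Here is a minimal, hedged sketch of how the parser is chosen when constructing the soup (the markup string is an invented snippet; html.parser ships with the standard library, while lxml and html5lib must be installed separately):
from bs4 import BeautifulSoup
markup = '<p class="title"><b>Hello</b>'   # deliberately unclosed tags
# standard-library parser: no extra installation required
print(BeautifulSoup(markup, 'html.parser'))
# lxml parser: faster and more lenient, requires `pip install lxml`
print(BeautifulSoup(markup, 'lxml'))
# html5lib parser: parses the way a browser does, requires `pip install html5lib`
print(BeautifulSoup(markup, 'html5lib'))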
II. Basic Syntax
1. Node selectors: basic usage
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story>Once upon a time there are three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1"><!-- Elsie -->/a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
Suppose we want to get the title node and its text content from the HTML above. The syntax looks like this:
Import and initialize BeautifulSoup
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
During initialization, non-standard HTML is corrected automatically, for example unclosed tags are completed.
Get the title node and check its type
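As a small sketch (not part of the original post), printing the prettified tree shows those corrections, e.g. the closing tags that were missing from the input:
# prettify() re-serializes the corrected parse tree with indentation;
# the </body> and </html> tags missing from the input appear in the output
print(soup.prettify())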
print(soup.title)
print(type(soup.title))
# Output:
<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
What we get back is the complete title node, tag plus text content.
Get the text content of the title node
print(soup.title.string)
# Output:
The Dormouse's story
Other information, such as the node's name and its attributes, is just as easy to get.
Get the name of the title node
print(soup.title.name)
# Output:
title
Get all attributes of the p node, or one specific attribute
The p node has several attributes, such as class and name; calling attrs returns all of them.
# Get all attributes
print(soup.p.attrs)
# Output:
{'class': ['title'], 'name': 'dromouse'}
# Get a single attribute: method one
print(soup.p.attrs['name'])
# Output:
dromouse
# Get a single attribute: method two
print(soup.p['name'])
# Output:
dromouse
# A point to note when getting a single attribute
print(soup.p['class'])
# Output:
['title']
Note that some attributes come back as a string and others as a list of strings. The value of name is unique, so it is returned as a single string, whereas an element may carry several class values, so class is returned as a list. Also note that soup.p here refers to the first p node in the document.
Nested (hierarchical) selection
When nodes are nested inside one another, they can be selected level by level. For example, to select the title node and its content, instead of soup.title we can now write soup.head.title:
html = """
<html><head><title>The Dormouse's story</title></head>
<body>
"""
print(soup.head.title)
print(type(soup.head.title))
print(soup.head.title.string)
# Output:
<title>The Dormouse's story</title>
<class 'bs4.element.Tag'>
The Dormouse's story
2. Node selectors: advanced usage
Parent and ancestor nodes
To get the parent of a node element, use the parent attribute:
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
<p class="story>...</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.a.parent)
# Output:
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
Here we selected the parent of the first a node. Its parent is clearly the p node, so the output is that p node together with everything inside it.
To get all ancestor elements instead, use the parents attribute:
html = """
<html>
<body>
<p class="story">
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
</p>
"""
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(type(soup.a.parents))
print(list(enumerate(soup.a.parents)))
# Output:
<class 'generator'>
[(0, <p class="story">
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>), (1, <body>
<p class="story">
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
</body>), (2, <html>
<body>
<p class="story">
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
</body></html>), (3, <html>
<body>
<p class="story">
<a class="sister" href="http://example.com/elsie" id="link1">
<span>Elsie</span>
</a>
</p>
</body></html>)]
Why do two chunks of text starting with html appear here? Because parents is traversed in the order p → body → html → [document], and the final [document] entry, the BeautifulSoup object for the whole document, prints the same markup as the html node.
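A small sketch (not in the original post) that prints only the ancestor names makes that order easier to see:
for i, parent in enumerate(soup.a.parents):
    # the last entry, the BeautifulSoup object itself, is named '[document]'
    print(i, parent.name)
# Output:
# 0 p
# 1 body
# 2 html
# 3 [document]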
Children and descendants
After selecting a node element, you can get its direct children by calling the contents attribute:
html = """
<html>
<head>
<title>The Dormouse's story</title>
</head>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elise" class="sister" id="link1">
<span>Elise</span>
</a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
<p class="story">...</p>
"""
As the output below shows, the result comes back as a list; the p node contains both plain text and child nodes.
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.contents)
# Output:
['
Once upon a time there were three little sisters; and their names were
', <a class="sister" href="http://example.com/elise" id="link1">
<span>Elise</span>
</a>, '
', <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>, '
and
', <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>, '
and they lived at the bottom of a well.
']
The span node, being a grandchild of the p node, is not listed separately but appears inside its a node, which shows that contents yields a list of direct children only.
Likewise, we can use the children attribute to get the corresponding result:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.children)
for i, child in enumerate(soup.p.children):
    print(i, child)
# Output:
<list_iterator object at 0x000000000303F7B8>
0
Once upon a time there were three little sisters; and their names were
1 <a class="sister" href="http://example.com/elise" id="link1">
<span>Elise</span>
</a>
2
3 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
4
and
5 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
6
and they lived at the bottom of a well.
If you also want all descendant nodes, use the descendants attribute:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.p.descendants)
for i, child in enumerate(soup.p.descendants):
    print(i, child)
# Output:
<generator object Tag.descendants at 0x000000000301F228>
0
Once upon a time there were three little sisters; and their names were
1 <a class="sister" href="http://example.com/elise" id="link1">
<span>Elise</span>
</a>
2
3 <span>Elise</span>
4 Elise
5
6
7 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
8 Lacie
9
and
10 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
11 Tillie
12
and they lived at the bottom of a well.
Iterating over the output shows that this time the span node is included: descendants queries all children recursively and yields every descendant node.
Sibling nodes
What if we want to get sibling nodes?
html = """
<html>
<body>
<p class="story">
Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">
<span>Elsie</span>
</a>
Hello
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
and they lived at the bottom of a well.
</p>
"""
next_sibling and previous_sibling return a node's next and previous sibling element respectively, while next_siblings and previous_siblings return all following and all preceding siblings, as shown in the sketch below.
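A minimal sketch of these attributes against the HTML above (not part of the original post; note that whitespace-only text nodes also count as siblings):
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
# the sibling immediately after the first a node is the text node containing 'Hello'
print('Next sibling:', repr(soup.a.next_sibling))
# the sibling immediately before it is the leading story text
print('Previous sibling:', repr(soup.a.previous_sibling))
# next_siblings is a generator over everything that follows, text nodes included
print('All next siblings:', list(soup.a.next_siblings))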
3. Method selectors
find_all(): query all elements that match the given criteria
html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(name='ul'))
print(type(soup.find_all(name='ul')[0]))
# Output:
[<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>, <ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>]
<class 'bs4.element.Tag'>
Querying the ul nodes with the find_all method returns a list of length 2, and each element is of type bs4.element.Tag.
We can also nest queries, for example to get the text content of each li node:
for ul in soup.find_all(name='ul'):
    print(ul.find_all(name='li'))
    for li in ul.find_all(name='li'):
        print(li.string)
# Output:
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
Foo
Bar
Jay
[<li class="element">Foo</li>, <li class="element">Bar</li>]
Foo
Bar
Besides querying by node name, we can also pass in attributes to filter on:
html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(attrs={'id': 'list-1'}))
print(soup.find_all(attrs={'name': 'elements'}))
# Output:
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
For common attributes such as id and class, you do not need to go through attrs. For example, to query the node whose id is list-1, you can pass the id argument directly. Using the same markup, here is another way to run the query:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(id='list-1'))
print(soup.find_all(class_='element'))
# Output:
[<ul class="list" id="list-1" name="elements">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>]
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>, <li class="element">Foo</li>, <li class="element">Bar</li>]
The text parameter matches the text of nodes; you can pass in either a string or a compiled regular expression object.
html = '''
<div class="panel">
<div class="panel-body">
<a>Hello, this is a link</a>
<a>Hello, this is a link, too</a>
</div>
</div>
'''
from bs4 import BeautifulSoup
import re
soup = BeautifulSoup(html, 'lxml')
print(soup.find_all(text=re.compile('link')))
# Output:
['Hello, this is a link', 'Hello, this is a link, too']
find(): returns a single element, namely the first one that matches
html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.find(name='ul'))
print(type(soup.find(name='ul')))
print(soup.find(class_='list'))
# Output:
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<class 'bs4.element.Tag'>
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
Other query methods (a short sketch of a few of them follows the list)
find_parents() and find_parent(): the former returns all ancestor nodes, the latter the direct parent
find_next_siblings() and find_next_sibling(): the former returns all following siblings, the latter the first following sibling
find_previous_siblings() and find_previous_sibling(): the former returns all preceding siblings, the latter the first preceding sibling
find_all_next() and find_next(): the former returns all matching nodes after the current node, the latter the first matching node
find_all_previous() and find_previous(): the former returns all matching nodes before the current node, the latter the first matching node
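A minimal, hedged sketch of a few of these methods, assuming soup has been rebuilt from the three-sisters HTML used in the sibling-node example above (that reuse is an assumption, not part of the original post):
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')   # html: the three-sisters markup shown earlier
a = soup.find(id='link1')
# direct parent of the first a node: the p node with class "story"
print(a.find_parent('p')['class'])                    # ['story']
# first following sibling that is itself an a node
print(a.find_next_sibling('a')['id'])                 # link2
# every a node that appears after the current one in document order
print([tag['id'] for tag in a.find_all_next('a')])    # ['link2', 'link3']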
4. CSS selectors
To use CSS selectors, simply call the select() method and pass in the corresponding CSS selector:
html = '''
<div class="panel">
<div class="panel-heading">
<h4>Hello</h4>
</div>
<div class="panel-body">
<ul class="list" id="list-1">
<li class="element">Foo</li>
<li class="element">Bar</li>
<li class="element">Jay</li>
</ul>
<ul class="list list-small" id="list-2">
<li class="element">Foo</li>
<li class="element">Bar</li>
</ul>
</div>
</div>
'''
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
print(soup.select('.panel .panel-heading'))
print(soup.select('ul li'))
print(soup.select('#list-2 .element'))
print(type(soup.select('ul')[0]))
# Output:
[<div class="panel-heading">
<h4>Hello</h4>
</div>]
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>, <li class="element">Foo</li>, <li class="element">Bar</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]
<class 'bs4.element.Tag'>
Nested selection
The select() method also supports nested selection. For example, first select all ul nodes, then iterate over each one:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for ul in soup.select('ul'):
    print(ul.select('li'))
# Output:
[<li class="element">Foo</li>, <li class="element">Bar</li>, <li class="element">Jay</li>]
[<li class="element">Foo</li>, <li class="element">Bar</li>]
As you can see, for every ul node this prints the list of all li nodes inside it.
Getting attributes
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for ul in soup.select('ul'):
    print(ul['id'])
    print(ul.attrs['id'])
# Output:
list-1
list-1
list-2
list-2
As you can see, both indexing the node with square brackets and the attribute name, and going through the attrs attribute, return the attribute value successfully.
Getting text
To get the text, use the string attribute covered earlier or the get_text() method:
from bs4 import BeautifulSoup
soup = BeautifulSoup(html, 'lxml')
for li in soup.select('li'):
    print('Get Text:', li.get_text())
    print('String:', li.string)
# Output:
Get Text: Foo
String: Foo
Get Text: Bar
String: Bar
Get Text: Jay
String: Jay
Get Text: Foo
String: Foo
Get Text: Bar
String: Bar