A Detailed Guide to Traversing the Document Tree and Working with Tags in the Python Scraping Library BeautifulSoup (Beginner-Friendly)

Posted by shabge


This post introduces the methods and attributes that the Python scraping library BeautifulSoup provides for traversing the document tree and working with tags. The examples below cover the most basic operations.


```python
html_doc = """
<html><head><title>The Dormouse's story</title></head>

<p class="title"><b>The Dormouse's story</b></p>

<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>

<p class="story">...</p>
"""

from bs4 import BeautifulSoup
soup = BeautifulSoup(html_doc, 'lxml')
```

I. Child nodes

A Tag may contain multiple strings or other Tags; these are all children of that Tag. BeautifulSoup provides many attributes for navigating and iterating over child nodes.

1. Getting a Tag by its name

```python
print(soup.head)
print(soup.title)
```

```
<head><title>The Dormouse's story</title></head>
<title>The Dormouse's story</title>
```
Access by name only returns the first matching Tag; to get every Tag of a given kind, use the find_all method.

```python
soup.find_all('a')
```

```
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
```
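As a quick sanity check on the difference, here is a minimal sketch using the stdlib `html.parser` backend and a made-up two-link snippet (not from the example above):

```python
from bs4 import BeautifulSoup

# A tiny, hypothetical snippet just to illustrate the difference.
html = '<body><a id="link1">one</a><a id="link2">two</a></body>'
soup = BeautifulSoup(html, 'html.parser')

# Attribute-style access (soup.a) is shorthand for find(): first match only.
print(soup.a['id'])                               # link1
# find_all() returns a list of every matching Tag.
print([tag['id'] for tag in soup.find_all('a')])  # ['link1', 'link2']
```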

2. The contents attribute: returns a Tag's children as a list

```python
head_tag = soup.head
head_tag.contents
```

```
[<title>The Dormouse's story</title>]
```

```python
title_tag = head_tag.contents[0]
title_tag
```

```
<title>The Dormouse's story</title>
```

```python
title_tag.contents
```

```
["The Dormouse's story"]
```
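Note that .contents is an ordinary Python list, so it can mix Tags and NavigableStrings. A minimal sketch with a made-up snippet (stdlib html.parser):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p><b>bold</b> plain</p>', 'html.parser')
p = soup.p

# Two direct children: the <b> Tag and the text node ' plain'
print(len(p.contents))      # 2
print(p.contents[0].name)   # b
print(repr(p.contents[1]))  # ' plain'
```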
3. children: iterate over a Tag's direct children with this attribute

```python
for child in title_tag.children:
    print(child)
```

```
The Dormouse's story
```
4. descendants: both contents and children return only direct children, whereas descendants recursively iterates over all of a tag's descendants

```python
for child in head_tag.children:
    print(child)
```

```
<title>The Dormouse's story</title>
```

```python
for child in head_tag.descendants:
    print(child)
```

```
<title>The Dormouse's story</title>
The Dormouse's story
```
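The same contrast in a minimal sketch, with a made-up nested snippet (stdlib html.parser):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<div><p><b>text</b></p></div>', 'html.parser')
div = soup.div

# Direct children only: just the <p> tag.
print([getattr(c, 'name', None) for c in div.children])     # ['p']
# Full recursive walk: <p>, then <b>, then the string (its name is None).
print([getattr(d, 'name', None) for d in div.descendants])  # ['p', 'b', None]
```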

5. string: if a tag has exactly one child of type NavigableString, the tag exposes it via .string

```python
title_tag.string
```

```
"The Dormouse's story"
```
If a tag has a single child tag, .string returns that child's NavigableString.

```python
head_tag.string
```

```
"The Dormouse's story"
```
If a tag has more than one child, .string cannot tell which child's content it should refer to, so it returns None:

```python
print(soup.html.string)
```

```
None
```
6. strings and stripped_strings

If a tag contains multiple strings, you can iterate over them with .strings:

```python
for string in soup.strings:
    print(string)
```

```
The Dormouse's story


The Dormouse's story


Once upon a time there were three little sisters; and their names were

Elsie
,

Lacie
and

Tillie
;
and they lived at the bottom of a well.


...
```

The output of .strings contains many extra spaces and blank lines; use .stripped_strings to remove this whitespace:

```python
for string in soup.stripped_strings:
    print(string)
```

```
The Dormouse's story
The Dormouse's story
Once upon a time there were three little sisters; and their names were
Elsie
,
Lacie
and
Tillie
;
and they lived at the bottom of a well.
...
```
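When you only want the text, get_text() (a standard bs4 method) does the joining for you. A sketch with a made-up snippet (stdlib html.parser):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>  Hello <b> world </b>!  </p>', 'html.parser')

# stripped_strings trims each string and skips whitespace-only ones
print(list(soup.stripped_strings))     # ['Hello', 'world', '!']
# get_text() joins every string; strip=True cleans each piece first
print(soup.get_text(' ', strip=True))  # Hello world !
```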

II. Parent nodes

1. parent: gets an element's parent node

```python
title_tag = soup.title
title_tag.parent
```

```
<head><title>The Dormouse's story</title></head>
```
Strings have parents too:

```python
title_tag.string.parent
```

```
<title>The Dormouse's story</title>
```
2. parents: recursively gets all of an element's ancestors

```python
link = soup.a
for parent in link.parents:
    if parent is None:
        print(parent)
    else:
        print(parent.name)
```

```
p
body
html
[document]
```
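The loop above can be condensed into a list comprehension; note that the BeautifulSoup object at the top of the tree reports its name as '[document]'. A sketch with a made-up snippet (stdlib html.parser):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<html><body><p><b>hi</b></p></body></html>', 'html.parser')

# .parents walks upward from the element to the top of the tree
path = [parent.name for parent in soup.b.parents]
print(path)  # ['p', 'body', 'html', '[document]']
```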

III. Sibling nodes

```python
sibling_soup = BeautifulSoup("<a><b>text1</b><c>text2</c></b></a>", 'lxml')
print(sibling_soup.prettify())
```

```
<html>
 <body>
  <a>
   <b>
    text1
   </b>
   <c>
    text2
   </c>
  </a>
 </body>
</html>
```

1. next_sibling and previous_sibling

```python
sibling_soup.b.next_sibling
```

```
<c>text2</c>
```

```python
sibling_soup.c.previous_sibling
```

```
<b>text1</b>
```
In real documents, .next_sibling and .previous_sibling are usually strings or whitespace:

```python
soup.find_all('a')
```

```
[<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
 <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
 <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]
```

```python
soup.a.next_sibling  # the first <a>'s next_sibling is the string ', '
```

```
', '
```

```python
soup.a.next_sibling.next_sibling
```

```
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
```
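Because the next sibling is often just punctuation or whitespace, bs4 also provides find_next_sibling(), which skips over string nodes to the next matching Tag. A minimal sketch with a made-up snippet (stdlib html.parser):

```python
from bs4 import BeautifulSoup

html = '<p><a id="link1">one</a>, <a id="link2">two</a></p>'
soup = BeautifulSoup(html, 'html.parser')

first = soup.a
print(repr(first.next_sibling))            # ', ' -- the text between the tags
print(first.find_next_sibling('a')['id'])  # link2 -- skips the string node
```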
2. next_siblings and previous_siblings

```python
for sibling in soup.a.next_siblings:
    print(repr(sibling))
```

```
', '
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
' and '
<a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>
'; and they lived at the bottom of a well.'
```

```python
for sibling in soup.find(id="link3").previous_siblings:
    print(repr(sibling))
```

```
' and '
<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>
', '
<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>
'Once upon a time there were three little sisters; and their names were '
```

IV. Going back and forth

1. next_element and previous_element

These point to the next or previous parsed object (a string or a tag), i.e. the following and preceding nodes in a depth-first traversal of the document.

```python
last_a_tag = soup.find("a", id="link3")
print(last_a_tag.next_sibling)
print(last_a_tag.next_element)
```

```
;
and they lived at the bottom of a well.
Tillie
```

```python
last_a_tag.previous_element
```

```
' and '
```
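The difference from next_sibling is parse order: next_element descends into the current tag's contents first. A minimal sketch with a made-up snippet (stdlib html.parser):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<a><b>text1</b><c>text2</c></a>', 'html.parser')
b = soup.b

# next_sibling jumps across to <c>; next_element is the string inside <b>
print(b.next_sibling)        # <c>text2</c>
print(repr(b.next_element))  # 'text1'
```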
2. next_elements and previous_elements

With .next_elements and .previous_elements you can move forward or backward through the document's parsed content, as if the document were being parsed again from that point:

```python
for element in last_a_tag.next_elements:
    print(repr(element))
```

```
'Tillie'
'; and they lived at the bottom of a well.'
' '
<p class="story">...</p>
'...'
' '
```
The text and images in this post come from the internet, combined with my own notes, and are intended for learning and exchange only, not for any commercial use. Copyright remains with the original authors; if there is a problem, please contact us promptly so we can address it.

 
