Reading the official BeautifulSoup docs: modifying the HTML tree
Posted by 内脏坏了
Modifying the HTML tree mostly comes down to modifying its tags: changing a tag's name (that is, its type), its attributes, and its contents. BeautifulSoup provides convenient methods for all of these. Names and attributes first:
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>')
tag = soup.b

tag.name = "blockquote"
tag['class'] = 'verybold'
tag['id'] = 1
tag
# <blockquote class="verybold" id="1">Extremely bold</blockquote>

del tag['class']
del tag['id']
tag
# <blockquote>Extremely bold</blockquote>
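A small aside that is not in the excerpt above: bs4 treats class as a multi-valued attribute, so reading it back gives a list, and you can assign a list to it. A minimal sketch of my own, assuming bs4 4.x with the built-in html.parser:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', 'html.parser')
tag = soup.b

tag['class']
# ['boldest']                               # read back as a list, not a plain string
tag['class'] = ['verybold', 'highlight']    # assign a list; serialized space-separated
tag
# <b class="verybold highlight">Extremely bold</b>
tag.attrs
# {'class': ['verybold', 'highlight']}      # all attributes as a dict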
Next, changing a tag's contents by assigning to .string:
markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
tag = soup.a

tag.string = "New link text."
tag
# <a href="http://example.com/">New link text.</a>
You can also use append(). What surprised me is that the result looks the same as assigning to .string, but if you inspect .contents you can see that append() really appends a new item to the list that .contents represents. You can also pass a NavigableString instead of a plain string; it behaves the same way.
from bs4 import NavigableString

soup = BeautifulSoup("<a>Foo</a>")
soup.a.append("Bar")

soup
# <html><head></head><body><a>FooBar</a></body></html>
soup.a.contents
# [u'Foo', u'Bar']

soup = BeautifulSoup("<b></b>")
tag = soup.b
tag.append("Hello")
new_string = NavigableString(" there")
tag.append(new_string)
tag
# <b>Hello there.</b>
tag.contents
# [u'Hello', u' there']
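To make the "looks the same, but .contents differs" point concrete, here is a small sketch of my own (not from the docs): once a tag holds more than one child node, .string becomes None, while get_text() still joins the pieces together.

from bs4 import BeautifulSoup, NavigableString

tag = BeautifulSoup("<b></b>", 'html.parser').b
tag.append("Hello")
tag.append(NavigableString(" there"))

print(tag.string)         # None: with more than one child, .string is ambiguous
print(tag.get_text())     # Hello there -- joins all the string children
print(len(tag.contents))  # 2: append() really did add a second node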
You can also use append() to add a comment inside a tag:
from bs4 import Comment
new_comment = Comment("Nice to see you.")
tag.append(new_comment)
tag
# <b>Hello there<!--Nice to see you.--></b>
tag.contents
# [u'Hello', u' there', u'Nice to see you.']
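Even though the comment prints like a plain string inside .contents, it keeps its own type, which you can check. A quick sketch of my own:

from bs4 import BeautifulSoup, Comment

tag = BeautifulSoup("<b>Hello</b>", 'html.parser').b
tag.append(Comment("Nice to see you."))

last = tag.contents[-1]
print(type(last))                 # <class 'bs4.element.Comment'>
print(isinstance(last, Comment))  # True: still a Comment node, not plain text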
You can even create a brand-new tag with new_tag(); only its first argument, the tag name, is required:
soup = BeautifulSoup("<b></b>")
original_tag = soup.b

new_tag = soup.new_tag("a", href="http://www.example.com")
original_tag.append(new_tag)
original_tag
# <b><a href="http://www.example.com"></a></b>

new_tag.string = "Link text."
original_tag
# <b><a href="http://www.example.com">Link text.</a></b>
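One thing the example above does not show: an attribute like class cannot be passed as a Python keyword argument to new_tag(), so the simplest route is to set it on the new tag after creation, just like on any other tag. A sketch of my own, assuming bs4 4.x:

from bs4 import BeautifulSoup

soup = BeautifulSoup("<b></b>", 'html.parser')
new_tag = soup.new_tag("a", href="http://www.example.com")
new_tag['class'] = 'external'   # set attributes the keyword syntax can't express
new_tag.string = "Link text."
soup.b.append(new_tag)
soup.b
# <b><a class="external" href="http://www.example.com">Link text.</a></b>
# (attribute order in the output may differ)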
Besides append() there is insert(). Under the hood it, too, inserts into the list returned by .contents, at the position you give it:
markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
tag = soup.a

tag.insert(1, "but did not endorse ")
tag
# <a href="http://example.com/">I linked to but did not endorse <i>example.com</i></a>
tag.contents
# [u'I linked to ', u'but did not endorse ', <i>example.com</i>]
Related to insert() are insert_before() and insert_after(), which insert the argument immediately before or after the element they are called on:
soup = BeautifulSoup("<b>stop</b>")
tag = soup.new_tag("i")
tag.string = "Don't"
soup.b.string.insert_before(tag)
soup.b
# <b><i>Don't</i>stop</b>

soup.b.i.insert_after(soup.new_string(" ever "))
soup.b
# <b><i>Don't</i> ever stop</b>
soup.b.contents
# [<i>Don't</i>, u' ever ', u'stop']
clear() is simple: it removes everything inside the tag it is called on.
markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
tag = soup.a

tag.clear()
tag
# <a href="http://example.com/"></a>
More interesting than clear() is extract(). extract() pulls the tag it is called on out of the HTML tree and returns it; the tag itself is removed from the original tree.
markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

i_tag = soup.i.extract()

a_tag
# <a href="http://example.com/">I linked to</a>
i_tag
# <i>example.com</i>
print(i_tag.parent)
# None
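Because extract() returns the detached tag (with .parent set to None), a common pattern is to move a node somewhere else: extract it, then append it to a new parent. A sketch of my own, with made-up ids:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<div id="src"><i>example.com</i></div><div id="dst"></div>',
                     'html.parser')
i_tag = soup.i.extract()           # detach <i> from the first div
soup.find(id="dst").append(i_tag)  # ...and reattach it under the second div
soup
# <div id="src"></div><div id="dst"><i>example.com</i></div>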
decompose() differs from extract() in that it destroys the tag it is called on completely and returns nothing (note that clear(), above, only destroys the tag's contents).
markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

soup.i.decompose()
a_tag
# <a href="http://example.com/">I linked to</a>
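To underline the "returns nothing" part: decompose() hands back None, so if you still need the removed node afterwards, use extract() instead. A minimal sketch of my own:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<a href="http://example.com/">I linked to <i>example.com</i></a>',
                     'html.parser')
result = soup.i.decompose()
print(result)    # None: the <i> tag is destroyed, not handed back
soup.a           # the <a> tag is left with only its remaining text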
replace_with() swaps the tag it is called on for the tag passed as an argument, and returns the tag it was called on (roughly extract() plus an extra insert() step):
markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

new_tag = soup.new_tag("b")
new_tag.string = "example.net"
a_tag.i.replace_with(new_tag)

a_tag
# <a href="http://example.com/">I linked to <b>example.net</b></a>
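The return value mentioned above can be captured, which makes replace_with() handy when you want to swap a node out but keep it around. A sketch of my own:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<a href="http://example.com/">I linked to <i>example.com</i></a>',
                     'html.parser')
new_tag = soup.new_tag("b")
new_tag.string = "example.net"

old_tag = soup.a.i.replace_with(new_tag)  # returns the removed <i>
old_tag
# <i>example.com</i>
print(old_tag.parent)   # None: the old node is detached, just like after extract()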
Finally there are wrap() and unwrap(): wrap() wraps the element it is called on in the argument tag, while unwrap() replaces the tag it is called on with that tag's own contents.
soup = BeautifulSoup("<p>I wish I was bold.</p>")
soup.p.string.wrap(soup.new_tag("b"))
# <b>I wish I was bold.</b>

soup.p.wrap(soup.new_tag("div"))
# <div><p><b>I wish I was bold.</b></p></div>

markup = '<a href="http://example.com/">I linked to <i>example.com</i></a>'
soup = BeautifulSoup(markup)
a_tag = soup.a

a_tag.i.unwrap()
a_tag
# <a href="http://example.com/">I linked to example.com</a>
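A practical aside (my own sketch, not from the docs): wrap() is often used in a loop to wrap every matching tag, creating a fresh wrapper tag for each element:

from bs4 import BeautifulSoup

soup = BeautifulSoup("<p>one</p><p>two</p>", 'html.parser')
for p in soup.find_all("p"):
    p.wrap(soup.new_tag("div"))   # each <p> gets its own new <div> wrapper
soup
# <div><p>one</p></div><div><p>two</p></div>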