Using re.compile with BeautifulSoup in Python 3 to match the href addresses you want to scrape


Preface: this article was compiled by the editors of cha138.com. It introduces how to use re.compile with BeautifulSoup in Python 3 to match the href addresses you want to scrape; I hope it is of some reference value to you.

# -*- coding:utf-8 -*-
# Python 3 (uses urllib.request)
# XiaoDeng
# http://tieba.baidu.com/p/2460150866
# Tag operations


from bs4 import BeautifulSoup
import urllib.request
import re


# If you start from a URL, you can read the page like this:
#html_doc = "http://tieba.baidu.com/p/2460150866"
#req = urllib.request.Request(html_doc)  
#webpage = urllib.request.urlopen(req)  
#html = webpage.read()



html="""
<html><head><title>The Dormouse's story</title></head>
<body>
<p class="title" name="dromouse"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="xiaodeng"><!-- Elsie --></a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
<a href="http://example.com/lacie" class="sister" id="xiaodeng">Lacie</a>
and they lived at the bottom of a well.</p>
<p class="story">...</p>
"""
soup = BeautifulSoup(html, "html.parser")   # parse into a document object; the parser name must be a string


# Use re.compile to match the href addresses you want to scrape
for k in soup.find_all(href=re.compile("lacie")):
    print(k)
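The loop above prints whole tags. A minimal, self-contained sketch (using a trimmed-down copy of the document above) showing how to pull out just the matched href values as strings:

```python
import re
from bs4 import BeautifulSoup

# Trimmed-down version of the sample document above
html = """
<p class="story">
<a href="http://example.com/elsie" class="sister" id="xiaodeng"><!-- Elsie --></a>
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>
</p>
"""
soup = BeautifulSoup(html, "html.parser")

# find_all returns Tag objects; read the matched attribute with ["href"]
hrefs = [a["href"] for a in soup.find_all(href=re.compile("lacie"))]
print(hrefs)  # ['http://example.com/lacie']
```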


# string= matches the text inside tags instead of the tags themselves
for k in soup.find_all(string=re.compile("Lacie")):
    print(k)
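Note that `string=` matches the text nodes, so this second loop yields `NavigableString` objects rather than tags; `.parent` recovers the enclosing tag. A small sketch of that, assuming a one-tag document:

```python
import re
from bs4 import BeautifulSoup

html = '<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a>'
soup = BeautifulSoup(html, "html.parser")

# string= matches the text nodes inside tags, not the tags themselves,
# so find_all returns NavigableString objects here
texts = soup.find_all(string=re.compile("Lacie"))
print(texts[0])                 # Lacie
print(texts[0].parent["href"])  # http://example.com/lacie
```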

 
