Solution for the Python 3.x BeautifulSoup([your markup], "lxml") markup_type=markup_type)) warning


    import datetime
    import random
    import re
    from urllib.request import urlopen

    from bs4 import BeautifulSoup

    random.seed(datetime.datetime.now())

    def getLinks(articleUrl):
        html = urlopen("http://en.wikipedia.org" + articleUrl)
        bsOdj = BeautifulSoup(html)
        # Keep only internal article links: paths that start with /wiki/
        # and contain no colon (which would mark a namespace page)
        return bsOdj.find("div", {"id": "bodyContent"}).findAll(
            "a", href=re.compile("^(/wiki/)((?!:).)*$"))

    links = getLinks("/wiki/Kevin_Bacon")
    while len(links) > 0:
        # Follow a randomly chosen link from the current page
        newArticle = links[random.randint(0, len(links) - 1)].attrs["href"]
        print(newArticle)
        links = getLinks(newArticle)
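As a side note, the `href` filter in `getLinks` relies on the regular expression `^(/wiki/)((?!:).)*$`, which accepts internal article links and rejects namespaced pages such as `Talk:` ones. It can be checked in isolation with the standard `re` module (the sample URLs below are just illustrations):

```python
import re

# Matches paths that start with /wiki/ and contain no colon after it
pattern = re.compile(r"^(/wiki/)((?!:).)*$")

print(bool(pattern.match("/wiki/Kevin_Bacon")))       # True: plain article link
print(bool(pattern.match("/wiki/Talk:Kevin_Bacon")))  # False: colon marks a namespace page
print(bool(pattern.match("http://example.com/")))     # False: not an internal /wiki/ path
```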

This is my source code; running it produced the following warning:

D:\Anaconda3\lib\site-packages\bs4\__init__.py:181: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("lxml"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.

The code that caused this warning is on line 16 of the file D:/ThronePython/Python3 网络数据爬取/BeautifulSoup 爬虫_开始爬取/BeautifulSoup 维基百科六度分割_构建从一个页面到另一个页面的爬虫.py. To get rid of this warning, change code that looks like this:

 BeautifulSoup([your markup])

to this:

 BeautifulSoup([your markup], "lxml")

  markup_type=markup_type))

After some searching, it turns out the warning is raised because no parser was specified explicitly; when the parser argument is omitted, BeautifulSoup falls back to whatever parser it finds on the system, which can differ between machines. Set the parser as the warning suggests by changing the line

    bsOdj = BeautifulSoup(html)

to:

    bsOdj = BeautifulSoup(html, "lxml")

and the warning disappears.
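If you cannot be sure that lxml is installed on every machine the script will run on, one option (a small sketch, not part of the original post) is to pick the parser name at runtime and fall back to Python's built-in `html.parser` when lxml is missing:

```python
import importlib.util

# Prefer lxml when it is available; otherwise use the parser from the standard library
parser = "lxml" if importlib.util.find_spec("lxml") else "html.parser"
print(parser)

# Then construct the soup with the explicit parser name, e.g.:
# bsOdj = BeautifulSoup(html, parser)
```

Since the parser is always named explicitly, the UserWarning is suppressed either way; only the parsing behaviour may differ slightly between the two backends.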
