Python Web Crawler


Crawling the Entire Web

When a crawler roams the open web, it inevitably lands on many different sites, and each site can have a completely different structure and content.

Let's build, step by step, the scripts that crawl across the web.

I. Start from one site and hop to a different site each time. If a page contains no link to another site, pick another page on the current site and keep looking until an external link turns up.

# -*- coding:utf-8 -*-

from urllib.request import urlopen
from bs4 import BeautifulSoup
from random import choice
import re

visitedpages = set()

def getInternalLinks(bsObj, includeUrl):
    # Internal links either start with "/" or contain the current domain.
    return [eachlink.attrs["href"]
            for eachlink in bsObj.find_all("a", href=re.compile("^(/|.*" + includeUrl + ")"))
            if "href" in eachlink.attrs]

def getExternalLinks(bsObj, excludeUrl):
    # External links start with "http" or "www" and, thanks to the (?!...)
    # negative lookahead, never contain the current domain.
    return [eachlink.attrs["href"]
            for eachlink in bsObj.find_all("a", href=re.compile("^(http|www)((?!" + excludeUrl + ").)*$"))
            if "href" in eachlink.attrs]

def splitAddress(address):
    # Strip the scheme and split on "/"; addressParts[0] is the domain.
    addressParts = address.replace("http://", "").split("/")
    return addressParts

def getRandomExternalLink(startingPage):
    with urlopen(startingPage) as html:
        bsObj = BeautifulSoup(html, "html.parser")
    externalLinks = getExternalLinks(bsObj, splitAddress(startingPage)[0])
    if len(externalLinks) == 0:
        # No external link on this page: fall back to a random internal page
        # so the next call can keep searching from there.
        internalLinks = getInternalLinks(bsObj, splitAddress(startingPage)[0])
        return choice(internalLinks)
    else:
        return choice(externalLinks)

def followExternalLink(startingPage):
    # Hop from page to page until a link repeats (or an error stops us).
    externalLink = getRandomExternalLink(startingPage)
    if externalLink in visitedpages:
        print("visited")
    else:
        print("the random external link is   " + externalLink)
        visitedpages.add(externalLink)
        followExternalLink(externalLink)


if __name__ == "__main__":
    #print(splitAddress("http://www.oreilly.com/")[0])
    #print(getRandomExternalLink("http://www.oreilly.com/"))
    followExternalLink("http://www.oreilly.com/")
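The pattern that getExternalLinks builds is the subtle part: (?!...) is a negative lookahead, so the regex only matches hrefs that start with http or www and contain the excluded domain nowhere. Here is a minimal standalone check of that behavior; the sample URLs are made up for illustration, and re.escape is added so the dots in the domain are taken literally, which the original pattern omits:

import re

# The same pattern getExternalLinks builds, for excludeUrl = "oreilly.com":
pattern = re.compile("^(http|www)((?!" + re.escape("oreilly.com") + ").)*$")

samples = [
    "http://www.oreilly.com/about",   # same site  -> no match (internal)
    "http://en.wikipedia.org/wiki/",  # other site -> match    (external)
    "/catalog/index.html",            # relative   -> no match (internal)
]
for url in samples:
    print(url, "->", "external" if pattern.match(url) else "not external")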

II. Start from one site, traverse every page on that site, and collect all of the site's links that point to other sites.

# -*- coding:utf-8 -*-

from urllib.request import urlopen
from urllib.error import HTTPError
from bs4 import BeautifulSoup
import re

def getInternalLinks(bsObj, includeUrl):
    # Internal links either start with "/" or contain the current domain.
    return [eachlink.attrs["href"]
            for eachlink in bsObj.find_all("a", href=re.compile("^(/|.*" + includeUrl + ")"))
            if "href" in eachlink.attrs]

def getExternalLinks(bsObj, excludeUrl):
    # External links start with "http" or "www" and never contain the domain.
    return [eachlink.attrs["href"]
            for eachlink in bsObj.find_all("a", href=re.compile("^(http|www)((?!" + excludeUrl + ").)*$"))
            if "href" in eachlink.attrs]

def splitAddress(address):
    # Strip the scheme and split on "/"; addressParts[0] is the domain.
    addressParts = address.replace("http://", "").split("/")
    return addressParts

allINlinks = set()
allEXlinks = set()

def getAllexternalLinks(startPage):
    try:
        with urlopen(startPage) as html:
            bsObj = BeautifulSoup(html, "html.parser")
    except HTTPError as e:
        print(e)
    else:
        allinternallinks = getInternalLinks(bsObj, splitAddress(startPage)[0])
        allexternallinks = getExternalLinks(bsObj, splitAddress(startPage)[0])
        print("************external*******************************")
        for eachexternallink in allexternallinks:
            if eachexternallink not in allEXlinks:
                allEXlinks.add(eachexternallink)
                print(eachexternallink)
        print("************internal*******************************")
        for eachinternallink in allinternallinks:
            if eachinternallink not in allINlinks:
                allINlinks.add(eachinternallink)
                print(eachinternallink)
                # Recurse into every new internal page to collect its links.
                # NOTE: relative hrefs such as "/about" are passed to urlopen
                # unchanged, which fails; see the note below the listing.
                getAllexternalLinks(eachinternallink)

if __name__ == "__main__":
    getAllexternalLinks("http://www.oreilly.com/")

*************** The code above still has problems ***************

Two of them are visible in the listing: internal links that start with "/" are relative, so feeding them straight back to urlopen raises an error, and the unbounded recursion in getAllexternalLinks can exceed Python's recursion limit on a large site.
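A minimal sketch of one way to fix both problems, keeping the same start page: urllib.parse.urljoin resolves relative links such as "/about" into absolute URLs before they are fetched, and an explicit deque replaces the recursion so a large site cannot exhaust the call stack. The function name getSiteExternalLinks and the maxPages cap are introduced here for illustration, not taken from the original code:

# -*- coding:utf-8 -*-

from collections import deque
from urllib.request import urlopen
from urllib.error import HTTPError, URLError
from urllib.parse import urljoin, urlparse
from bs4 import BeautifulSoup

def getSiteExternalLinks(startPage, maxPages=50):
    # Breadth-first walk over internal pages with an explicit queue,
    # so there is no recursion and no recursion-limit problem.
    domain = urlparse(startPage).netloc
    toVisit = deque([startPage])
    seenInternal, seenExternal = {startPage}, set()
    while toVisit and len(seenInternal) <= maxPages:
        page = toVisit.popleft()
        try:
            with urlopen(page) as html:
                bsObj = BeautifulSoup(html, "html.parser")
        except (HTTPError, URLError) as e:
            print(e)
            continue
        for link in bsObj.find_all("a", href=True):
            # urljoin turns "/about" into "http://www.oreilly.com/about".
            url = urljoin(page, link.attrs["href"])
            if urlparse(url).netloc == domain:
                if url not in seenInternal:
                    seenInternal.add(url)
                    toVisit.append(url)
            elif url.startswith("http") and url not in seenExternal:
                seenExternal.add(url)
                print(url)
    return seenExternal

if __name__ == "__main__":
    getSiteExternalLinks("http://www.oreilly.com/")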
