Crawler: A Link Crawler

Posted by 星影L


Extracting and collecting data is a major part of data analysis, so learning to write a crawler is well worth it. Completing the data collection lays the groundwork for the analysis that follows.

Today I want to introduce an example from *Web Scraping With Python*: a link crawler. What follows is a brief summary of this kind of crawler, offered as a reference for fellow learners.

#! /usr/bin/env python
# -*- coding:utf-8 -*-
import re
import urlparse
import urllib2
import time
from datetime import datetime
import robotparser
import Queue
# Link crawler
'''
A link crawler needs to address the following issues:
1. Downloading a page can fail for reasons beyond our control, e.g. the requested page may not exist. Use try and except statements to catch the exception.
2. Downloads can also hit transient errors, e.g. a 503 Service Unavailable response from an overloaded server. Retry the download a few times.
3. Some sites block the default user agent, so we should set our own, e.g. user_agent='wswp'.
4. When following links, keep only those that match our goal; a regular expression such as '<a[^>]+href=["\'](.*?)["\']' is typically used to match them.
5. Consider what kind of link each one is: absolute links are fine as-is, but relative links must be turned into absolute ones with urlparse.urljoin().
6. Pages frequently link back to pages that have already been crawled, which would cause endless loops, so keep a URL manager that tracks crawled and uncrawled URLs.
7. Every crawler should obey the robots exclusion protocol (robots.txt), so we import the robotparser module to avoid downloading disallowed URLs.
8. Sometimes we need a proxy to access a site.
9. Crawling a site too fast risks being banned or overloading the server, so add a delay between consecutive downloads.
10. Some sites contain dynamic content that yields an unbounded number of pages. To avoid such crawler traps, set a maximum crawl depth (max_depth): the number of links followed to reach the current page.
'''
def link_crawler(seed_url, link_regex=None, delay=5, max_depth=-1, max_urls=-1, headers=None, user_agent='wswp', proxy=None, num_retries=1):
    """Crawl from the given seed URL following links matched by link_regex
    """
    # the queue of URLs that still need to be crawled
    crawl_queue = Queue.deque([seed_url])
    # the URLs that have been seen and at what depth
    seen = {seed_url: 0}
    # track how many URLs have been downloaded
    num_urls = 0
    rp = get_robots(seed_url)
    throttle = Throttle(delay)
    headers = headers or {}
    if user_agent:
        headers['User-agent'] = user_agent

    while crawl_queue:
        url = crawl_queue.pop()
        # check url passes robots.txt restrictions
        if rp.can_fetch(user_agent, url):
            throttle.wait(url)
            html = download(url, headers, proxy=proxy, num_retries=num_retries)
            links = []

            depth = seen[url]
            if depth != max_depth:
                # can still crawl further
                if link_regex:
                    # filter for links matching our regular expression
                    for link in get_links(html):
                        if re.match(link_regex, link):
                            links.append(link)
                    # links.extend(link for link in get_links(html) if re.match(link_regex, link))

                for link in links:
                    link = normalize(seed_url, link)
                    # check whether already crawled this link
                    if link not in seen:
                        seen[link] = depth + 1
                        # check link is within same domain
                        if same_domain(seed_url, link):
                            # success! add this new link to queue
                            crawl_queue.append(link)

            # check whether have reached downloaded maximum
            num_urls += 1
            if num_urls == max_urls:
                break
        else:
            print 'Blocked by robots.txt:', url


class Throttle:
    """Throttle downloading by sleeping between requests to same domain
    """
    def __init__(self, delay):
        # amount of delay between downloads for each domain
        self.delay = delay
        # timestamp of when a domain was last accessed
        self.domains = {}

    def wait(self, url):
        domain = urlparse.urlparse(url).netloc
        last_accessed = self.domains.get(domain)

        if self.delay > 0 and last_accessed is not None:
            sleep_secs = self.delay - (datetime.now() - last_accessed).seconds
            if sleep_secs > 0:
                time.sleep(sleep_secs)
        self.domains[domain] = datetime.now()
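# e.g. with throttle = Throttle(5), calling throttle.wait(url) before each
# download keeps roughly 5 seconds between requests to the same domain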


def download(url, headers, proxy, num_retries, data=None):
    print 'Downloading:', url
    request = urllib2.Request(url, data, headers)
    opener = urllib2.build_opener()
    if proxy:
        proxy_params = {urlparse.urlparse(url).scheme: proxy}
        opener.add_handler(urllib2.ProxyHandler(proxy_params))
    try:
        response = opener.open(request)
        html = response.read()
        code = response.code
    except urllib2.URLError as e:
        print 'Download error:', e.reason
        html = ''
        if hasattr(e, 'code'):
            code = e.code
            if num_retries > 0 and 500 <= code < 600:
                # retry 5XX HTTP errors
                return download(url, headers, proxy, num_retries-1, data)
        else:
            code = None
    return html


def normalize(seed_url, link):
    """Normalize this URL by removing hash and adding domain
    """
    link, _ = urlparse.urldefrag(link) # remove hash to avoid duplicates
    return urlparse.urljoin(seed_url, link)


def same_domain(url1, url2):
    """Return True if both URL‘s belong to same domain
    """
    return urlparse.urlparse(url1).netloc == urlparse.urlparse(url2).netloc


def get_robots(url):
    """Initialize robots parser for this domain
    """
    rp = robotparser.RobotFileParser()
    rp.set_url(urlparse.urljoin(url, '/robots.txt'))
    rp.read()
    return rp


def get_links(html):
    """Return a list of links from html
    """
    # a regular expression to extract all links from the webpage
    webpage_regex = re.compile('<a[^>]+href=["\'](.*?)["\']', re.IGNORECASE)
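    # e.g. this pattern extracts '/index' from '<a href="/index">Home</a>'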
    # list of all links from the webpage
    return webpage_regex.findall(html)


if __name__ == '__main__':
    link_crawler('http://example.webscraping.com', '/(index|view)', delay=0, num_retries=1, user_agent='BadCrawler')
    link_crawler('http://example.webscraping.com', '/(index|view)', delay=0, num_retries=1, max_depth=-1, user_agent='GoodCrawler')
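Note that the listing above is Python 2 code: urlparse, urllib2, robotparser, and Queue were all moved or renamed in Python 3. Below is a minimal sketch of how the same building blocks map onto the Python 3 standard library. It is my own port, not code from the book, and it trims the depth, proxy, and max_urls bookkeeping to keep the core logic visible.

#!/usr/bin/env python3
# A sketch of the same crawler on Python 3 (an assumed port, not the book's
# code): urllib2 -> urllib.request, urlparse -> urllib.parse,
# robotparser -> urllib.robotparser, Queue.deque -> collections.deque.
import re
import time
from collections import deque
from urllib import error, parse, request, robotparser


def download(url, user_agent='wswp', num_retries=1):
    print('Downloading:', url)
    req = request.Request(url, headers={'User-agent': user_agent})
    try:
        html = request.urlopen(req).read().decode('utf-8', errors='replace')
    except error.URLError as e:
        print('Download error:', e.reason)
        html = ''
        code = getattr(e, 'code', None)
        if num_retries > 0 and code is not None and 500 <= code < 600:
            # retry 5XX HTTP errors
            return download(url, user_agent, num_retries - 1)
    return html


def link_crawler(seed_url, link_regex, delay=5, user_agent='wswp'):
    rp = robotparser.RobotFileParser()
    rp.set_url(parse.urljoin(seed_url, '/robots.txt'))
    rp.read()
    seed_domain = parse.urlparse(seed_url).netloc
    crawl_queue = deque([seed_url])
    seen = {seed_url}
    last_accessed = {}  # domain -> timestamp of the previous request
    while crawl_queue:
        url = crawl_queue.pop()
        if not rp.can_fetch(user_agent, url):
            print('Blocked by robots.txt:', url)
            continue
        # politeness delay between requests to the same domain
        domain = parse.urlparse(url).netloc
        pause = delay - (time.time() - last_accessed.get(domain, 0))
        if pause > 0:
            time.sleep(pause)
        last_accessed[domain] = time.time()
        html = download(url, user_agent)
        for link in re.findall(r'<a[^>]+href=["\'](.*?)["\']', html, re.I):
            # strip fragments and resolve relative links, as in normalize()
            link = parse.urljoin(seed_url, parse.urldefrag(link)[0])
            if (link not in seen and re.search(link_regex, link)
                    and parse.urlparse(link).netloc == seed_domain):
                seen.add(link)
                crawl_queue.append(link)

It is invoked much like the original, e.g. link_crawler('http://example.webscraping.com', '/(index|view)', delay=3).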

  
