Web Crawler 2: Crawling Web Content with crawler4j

Posted by Andy 胡



Two jars are required:

  crawler4j-4.1-jar-with-dependencies.jar

  slf4j-simple-1.7.22.jar (without it you will see the warning: SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".)

Download for both jars:

http://download.csdn.net/detail/talkwah/9747407

 

(Material on crawler4j-4.1-jar-with-dependencies.jar is scarce, and downloading it from GitHub kept failing, so I packaged the jars together myself.)
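If you build with Maven, the same two dependencies can be pulled from Central instead of downloading jars by hand. A sketch of the coordinates, assumed to match the versions above (verify against your repository before relying on them):

```xml
<dependencies>
  <!-- crawler4j with its transitive dependencies -->
  <dependency>
    <groupId>edu.uci.ics</groupId>
    <artifactId>crawler4j</artifactId>
    <version>4.1</version>
  </dependency>
  <!-- SLF4J binding; silences the StaticLoggerBinder warning -->
  <dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-simple</artifactId>
    <version>1.7.22</version>
  </dependency>
</dependencies>
```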

References:

http://blog.csdn.net/zjm131421/article/details/13093869

 

http://favccxx.blog.51cto.com/2890523/1691079/

import java.util.Set;
import java.util.regex.Pattern;

import edu.uci.ics.crawler4j.crawler.CrawlConfig;
import edu.uci.ics.crawler4j.crawler.CrawlController;
import edu.uci.ics.crawler4j.crawler.Page;
import edu.uci.ics.crawler4j.crawler.WebCrawler;
import edu.uci.ics.crawler4j.fetcher.PageFetcher;
import edu.uci.ics.crawler4j.parser.HtmlParseData;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtConfig;
import edu.uci.ics.crawler4j.robotstxt.RobotstxtServer;
import edu.uci.ics.crawler4j.url.WebURL;

public class MyCrawler extends WebCrawler {
    // Three things a crawler must define:
    // _ which URLs should be visited?
    // _ how are they filtered?
    // _ what happens with each fetched page?
    private static final String C_URL = "http://www.ximalaya.com";

    @Override
    public boolean shouldVisit(Page referringPage, WebURL url) {
        String href = url.getURL().toLowerCase();
        // Skip resources ending in .mp3/.jpg/.png; href is lowercased above,
        // so the pattern must use lowercase extensions ("MP3" would never match)
        Pattern p = Pattern.compile(".*(\\.(mp3|jpg|png))$");
        return !p.matcher(href).matches() && href.startsWith(C_URL);
    }

    @Override
    public void visit(Page page) {

        String url = page.getWebURL().getURL();
        String parentUrl = page.getWebURL().getParentUrl();
        String anchor = page.getWebURL().getAnchor();
        System.out.println("********************************");
        System.out.println("URL        :" + url);
        System.out.println("Parent page:" + parentUrl);
        System.out.println("Anchor text:" + anchor);

        logger.info("URL: {}", url);
        logger.debug("Parent page: {}", parentUrl);
        logger.debug("Anchor text: {}", anchor);

        if (page.getParseData() instanceof HtmlParseData) {
            HtmlParseData htmlParseData = (HtmlParseData) page.getParseData();
            String text = htmlParseData.getText();
            String html = htmlParseData.getHtml();
            Set<WebURL> links = htmlParseData.getOutgoingUrls();
            System.out.println("--------------------------");
            System.out.println("Text length: " + text.length());
            System.out.println("Html length: " + html.length());
            System.out.println("Number of outgoing links: " + links.size());
        }
    }

    public static void main(String[] args) throws Exception {
        // In the original crawler4j example these two values are command-line arguments.
        // The storage folder acts as a temp directory; it need not exist beforehand.
        String crawlStorageFolder = "/data/crawl/root";
        int numberOfCrawlers = 7;

        CrawlConfig crawlConf = new CrawlConfig();
        crawlConf.setCrawlStorageFolder(crawlStorageFolder);
        PageFetcher pageFetcher = new PageFetcher(crawlConf);

        RobotstxtConfig robotConf = new RobotstxtConfig();
        RobotstxtServer robotServ = new RobotstxtServer(robotConf, pageFetcher);

        // Controller
        CrawlController c = new CrawlController(crawlConf,
                pageFetcher, robotServ);
        // Add the seed URL
        c.addSeed(C_URL);

        // Start crawling (start() blocks until the crawl finishes)
        c.start(MyCrawler.class, numberOfCrawlers);
    }
}
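One detail in shouldVisit is easy to get wrong: the URL is lowercased before matching, so the extension pattern must also be lowercase. The filter logic can be checked on its own, without crawler4j on the classpath; this is a minimal sketch (the class name FilterCheck and the sample URLs are made up for illustration):

```java
import java.util.regex.Pattern;

public class FilterCheck {
    static final String C_URL = "http://www.ximalaya.com";

    // Same logic as MyCrawler.shouldVisit: skip mp3/jpg/png resources
    // and stay within the seed site.
    public static boolean shouldVisit(String rawUrl) {
        String href = rawUrl.toLowerCase();
        Pattern p = Pattern.compile(".*(\\.(mp3|jpg|png))$");
        return !p.matcher(href).matches() && href.startsWith(C_URL);
    }

    public static void main(String[] args) {
        // Ordinary page on the seed site: visited
        System.out.println(shouldVisit("http://www.ximalaya.com/zhubo/1000"));
        // Image (note uppercase extension is still caught via toLowerCase): skipped
        System.out.println(shouldVisit("http://www.ximalaya.com/cover.JPG"));
        // Off-site link: skipped
        System.out.println(shouldVisit("http://other.example.com/page"));
    }
}
```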

 
