Dynamic Web Page Crawling Example (WebCollector + Selenium + PhantomJS)
Posted by jzssuanfa
Goal: crawl dynamic web pages.
Scope: "dynamic" here covers two cases: 1) pages that require user interaction, such as the common login flow; 2) pages whose HTML is generated at runtime by JS/AJAX. For example, the served HTML contains `<div id="test"></div>`, and a page script turns it into `<div id="test"><span>aaa</span></div>`, so a plain HTTP fetch never sees the `<span>`.
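To make the second case concrete, here is a minimal sketch (my own illustration, not from the original post; the URL is a placeholder) contrasting a static Jsoup fetch with a JS-enabled HtmlUnitDriver fetch:

```java
import org.jsoup.Jsoup;
import org.openqa.selenium.By;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;

public class StaticVsRendered {
    public static void main(String[] args) throws Exception {
        String url = "http://example.com/test.html"; // placeholder, not a real page

        // Static fetch: sees the HTML exactly as served, before any JS runs,
        // so #test is still empty here.
        System.out.println(Jsoup.connect(url).get().select("#test").text());

        // Rendered fetch: the driver executes the page's scripts first,
        // so #test now contains the generated <span>aaa</span>.
        HtmlUnitDriver driver = new HtmlUnitDriver(true); // true = enable JavaScript
        driver.get(url);
        System.out.println(driver.findElement(By.id("test")).getText()); // prints "aaa"
        driver.quit();
    }
}
```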
WebCollector 2 is used as the crawler framework here, and it is convenient; but the key to supporting dynamic pages is a second API: Selenium 2 (which integrates HtmlUnit and PhantomJS).
1)须要登录后的爬取,如新浪微博
```java
import java.util.Set;

import cn.edu.hfut.dmic.webcollector.crawler.DeepCrawler;
import cn.edu.hfut.dmic.webcollector.model.Links;
import cn.edu.hfut.dmic.webcollector.model.Page;
import cn.edu.hfut.dmic.webcollector.net.HttpRequesterImpl;

import org.openqa.selenium.Cookie;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;
import org.jsoup.nodes.Element;
import org.jsoup.select.Elements;

/*
 * Crawling after login
 * Refer: http://nutcher.org/topics/33
 *        https://github.com/CrawlScript/WebCollector/blob/master/README.zh-cn.md
 * Lib required: webcollector-2.07-bin, selenium-java-2.44.0 & its lib
 */
public class WebCollector1 extends DeepCrawler {

    public WebCollector1(String crawlPath) {
        super(crawlPath);
        /* Fetch the Sina Weibo cookie. The account and password are sent in
           plain text, so use a throwaway account. */
        try {
            String cookie = WeiboCN.getSinaCookie("yourAccount", "yourPwd");
            HttpRequesterImpl myRequester = (HttpRequesterImpl) this.getHttpRequester();
            myRequester.setCookie(cookie);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    @Override
    public Links visitAndGetNextLinks(Page page) {
        /* Extract the weibo posts */
        Elements weibos = page.getDoc().select("div.c");
        for (Element weibo : weibos) {
            System.out.println(weibo.text());
        }
        /* To crawl comments as well, extract the comment-page URLs here and return them */
        return null;
    }

    public static void main(String[] args) {
        WebCollector1 crawler = new WebCollector1("/home/hu/data/weibo");
        crawler.setThreads(3);
        /* Crawl the first 5 pages of one user's weibo */
        for (int i = 0; i < 5; i++) {
            crawler.addSeed("http://weibo.cn/zhouhongyi?vt=4&page=" + i);
        }
        try {
            crawler.start(1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static class WeiboCN {

        /**
         * Get a Sina Weibo cookie. This works for weibo.cn but not for weibo.com.
         * weibo.cn transmits the credentials in plain text, so use a throwaway account.
         * @param username Sina Weibo user name
         * @param password Sina Weibo password
         * @return the cookie string
         * @throws Exception if the login fails
         */
        public static String getSinaCookie(String username, String password) throws Exception {
            StringBuilder sb = new StringBuilder();
            HtmlUnitDriver driver = new HtmlUnitDriver();
            driver.setJavascriptEnabled(true);
            driver.get("http://login.weibo.cn/login/");

            WebElement mobile = driver.findElementByCssSelector("input[name=mobile]");
            mobile.sendKeys(username);
            WebElement pass = driver.findElementByCssSelector("input[name^=password]");
            pass.sendKeys(password);
            WebElement rem = driver.findElementByCssSelector("input[name=remember]");
            rem.click();
            WebElement submit = driver.findElementByCssSelector("input[name=submit]");
            submit.click();

            Set<Cookie> cookieSet = driver.manage().getCookies();
            driver.close();
            for (Cookie cookie : cookieSet) {
                sb.append(cookie.getName() + "=" + cookie.getValue() + ";");
            }
            String result = sb.toString();
            if (result.contains("gsid_CTandWM")) {
                return result;
            } else {
                throw new Exception("weibo login failed");
            }
        }
    }
}
```
* The custom path /home/hu/data/weibo (WebCollector1 crawler = new WebCollector1("/home/hu/data/weibo");) is where the crawl state is persisted in an embedded Berkeley DB.
* Overall this is adapted from the WebCollector author's own sample.
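The sample returns null from visitAndGetNextLinks, so nothing beyond the seeds is crawled. As a hedged sketch of how comment pages could be followed too (assuming WebCollector 2.07's Links behaves as a list of URL strings; the anchor selector is hypothetical), the override might look like:

```java
@Override
public Links visitAndGetNextLinks(Page page) {
    /* ... extract the posts as above ... */
    Links next = new Links();
    // Hypothetical selector: pick anchors whose href points at a comment page.
    for (Element a : page.getDoc().select("a[href*=comment]")) {
        next.add(a.attr("abs:href")); // absolute URL resolved by Jsoup
    }
    return next; // WebCollector schedules these for the next crawl depth
}
```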
2) Crawling HTML elements generated dynamically by JS
2.1) The crawler itself, plus a PageUtils helper class:
```java
import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

import cn.edu.hfut.dmic.webcollector.crawler.DeepCrawler;
import cn.edu.hfut.dmic.webcollector.model.Links;
import cn.edu.hfut.dmic.webcollector.model.Page;

/*
 * Crawling JS-generated content
 * Refer: http://blog.csdn.net/smilings/article/details/7395509
 */
public class WebCollector3 extends DeepCrawler {

    public WebCollector3(String crawlPath) {
        super(crawlPath);
    }

    @Override
    public Links visitAndGetNextLinks(Page page) {
        /* HtmlUnitDriver can also extract JS-generated data: */
        // HtmlUnitDriver driver = PageUtils.getDriver(page, BrowserVersion.CHROME);
        // String content = PageUtils.getPhantomJSDriver(page);
        WebDriver driver = PageUtils.getWebDriver(page);
        // List<WebElement> divInfos = driver.findElementsByCssSelector("#feed_content");
        List<WebElement> divInfos = driver.findElements(By.cssSelector("#feed_content span"));
        for (WebElement divInfo : divInfos) {
            System.out.println("Text: " + divInfo.getText());
        }
        return null;
    }

    public static void main(String[] args) {
        WebCollector3 crawler = new WebCollector3("/home/hu/data/wb");
        // NOTE: the page loop only matters for the commented-out paginated
        // sogou seed; the QQ URL below does not vary with `page`.
        for (int page = 1; page <= 5; page++) {
            // crawler.addSeed("http://www.sogou.com/web?query=" + URLEncoder.encode("编程") + "&page=" + page);
            crawler.addSeed("http://cq.qq.com/baoliao/detail.htm?294064");
        }
        try {
            crawler.start(1);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```
PageUtils.java
```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

import org.openqa.selenium.JavascriptExecutor;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.htmlunit.HtmlUnitDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriver;

import com.gargoylesoftware.htmlunit.BrowserVersion;

import cn.edu.hfut.dmic.webcollector.model.Page;

public class PageUtils {

    public static HtmlUnitDriver getDriver(Page page) {
        HtmlUnitDriver driver = new HtmlUnitDriver();
        driver.setJavascriptEnabled(true);
        driver.get(page.getUrl());
        return driver;
    }

    public static HtmlUnitDriver getDriver(Page page, BrowserVersion browserVersion) {
        HtmlUnitDriver driver = new HtmlUnitDriver(browserVersion);
        driver.setJavascriptEnabled(true);
        driver.get(page.getUrl());
        return driver;
    }

    public static WebDriver getWebDriver(Page page) {
        // WebDriver driver = new HtmlUnitDriver(true);

        // System.setProperty("webdriver.chrome.driver", "D:\\Installs\\Develop\\crawling\\chromedriver.exe");
        // WebDriver driver = new ChromeDriver();

        System.setProperty("phantomjs.binary.path",
                "D:\\Installs\\Develop\\crawling\\phantomjs-2.0.0-windows\\bin\\phantomjs.exe");
        WebDriver driver = new PhantomJSDriver();
        driver.get(page.getUrl());

        // JavascriptExecutor js = (JavascriptExecutor) driver;
        // js.executeScript("function(){}");
        return driver;
    }

    public static String getPhantomJSDriver(Page page) {
        Runtime rt = Runtime.getRuntime();
        Process process = null;
        try {
            process = rt.exec("D:\\Installs\\Develop\\crawling\\phantomjs-2.0.0-windows\\bin\\phantomjs.exe "
                    + "D:\\workspace\\crawlTest1\\src\\crawlTest1\\parser.js "
                    + page.getUrl().trim());
            InputStream in = process.getInputStream();
            InputStreamReader reader = new InputStreamReader(in, "UTF-8");
            BufferedReader br = new BufferedReader(reader);
            StringBuffer sbf = new StringBuffer();
            String tmp = "";
            while ((tmp = br.readLine()) != null) {
                sbf.append(tmp);
            }
            return sbf.toString();
        } catch (IOException e) {
            e.printStackTrace();
        }
        return null;
    }
}
```
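One caveat about PageUtils as written: getWebDriver launches a PhantomJS process per call and never shuts it down, so every visited page leaks a phantomjs.exe. A minimal usage sketch with cleanup (my addition, not part of the original sample):

```java
// Inside a visitAndGetNextLinks(Page page) implementation:
WebDriver driver = PageUtils.getWebDriver(page);
try {
    for (WebElement span : driver.findElements(By.cssSelector("#feed_content span"))) {
        System.out.println(span.getText());
    }
} finally {
    driver.quit(); // terminates this page's PhantomJS process
}
```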
2.2) Several driver options appear above: HtmlUnitDriver, ChromeDriver, PhantomJSDriver, and raw PhantomJS; see http://blog.csdn.net/five3/article/details/19085303. Their trade-offs:
Driver type | Pros | Cons | Typical use
--- | --- | --- | ---
Real browser driver | Faithfully reproduces real user behavior | Slow and less stable | Compatibility testing
HtmlUnit | Fast | JS engine is not one used by mainstream browsers | Testing pages with little JS
PhantomJS | Medium speed; behavior close to a real browser | Cannot mimic specific browsers' quirks | Headless (non-GUI) functional testing
2.3) While using PhantomJSDriver I hit ClassNotFoundException: org.openqa.selenium.browserlaunchers.Proxies; the cause, surprisingly, is a bug in Selenium 2.44 (that class is gone from the release). Pulling phantomjsdriver-1.2.1.jar via Maven finally solved it.
2.4) I also tried driving PhantomJS natively, i.e. without Selenium, by invoking the PhantomJS binary directly (see getPhantomJSDriver above). The native route needs a JS driver script; the parser.js used here is as follows:
```javascript
var system = require('system');
var address = system.args[1]; // the second command-line argument: the URL to load
// console.log('Loading a web page');
var page = require('webpage').create();
var url = address;
// console.log(url);
page.open(url, function (status) {
    // Page is loaded!
    if (status !== 'success') {
        console.log('Unable to post!');
    } else {
        // This print streams the rendered HTML to stdout, which the Java side
        // reads back through the process InputStream.
        console.log(page.content);
    }
    phantom.exit();
});
```
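To sanity-check parser.js outside Java, it can be run straight from the command line, e.g. `phantomjs parser.js http://cq.qq.com/baoliao/detail.htm?294064`; the rendered HTML is printed to stdout, which is exactly the stream getPhantomJSDriver reads back.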
3) Closing notes
3.1) HtmlUnitDriver + PhantomJSDriver is currently the most reliable combination for dynamic crawling.
3.2) The process involves quite a few jars and executables, and I ran into plenty of network walls along the way; anyone who needs the files is welcome to ask me.
References
http://www.ibm.com/developerworks/cn/web/1309_fengyq_seleniumvswebdriver/
http://blog.csdn.net/smilings/article/details/7395509
http://phantomjs.org/download.html
http://blog.csdn.net/five3/article/details/19085303
http://phantomjs.org/quick-start.html