apache_conf Using and configuring the Google Ajax Crawler so search engines can index ajax-rich pages (client-side MVC and Google Ajax Crawling)


Introduction: This post, compiled by the editors of cha138.com, mainly covers how to use and configure the Google Ajax Crawler so that search engines can index ajax-rich pages (client-side MVC and Google Ajax Crawling). We hope you find it a useful reference.

<!--
To view this page's markup as a search engine would see it without the google_ajax_crawler gem, open in a browser and
view source...

To see how the google_ajax_crawler gem delivers a rendered snapshot of the page, open /?_escaped_fragment_=test
-->

<html>
<head></head>
<body>
  <h1>A Simple State Test</h1>
  <!-- the url fragment (e.g. /#!something) will be rendered via JS in the span -->
  <p>State: <span id='page_state'></span></p>
  
  <!-- will be removed by js on page load -->
  <div class='loading' id='loading'>Loading....</div>

  <script type='text/javascript'>

  var init = function() {
    var writeHash = function() {
      document.getElementById('page_state').innerHTML = "Javascript rendering complete for client-side route " + document.location.hash;
      var loadingMask = document.getElementById('loading');
      if(loadingMask) loadingMask.parentNode.removeChild(loadingMask);
      console.log('done...');
    };

    window.addEventListener("hashchange", writeHash, false);
    setTimeout(writeHash, 500);
  };

  //
  // Only execute js if loading the page using an unescaped url
  //
  if(/#.*$/.test(document.location.href)) init();

  </script>
</body>
</html>
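
The page above keeps its client-side state in a #! URL fragment. Under Google's AJAX crawling scheme (now deprecated by Google, but still what this gem implements), a crawler rewrites such a URL into an _escaped_fragment_ query parameter before requesting it, and the middleware configured in the config.ru below intercepts that request and returns a pre-rendered snapshot. A minimal sketch of the URL mapping follows; the escaped_fragment_url helper is purely illustrative and not part of the gem:

require 'cgi'

# Illustrative sketch: how a crawler maps a #! URL to its _escaped_fragment_
# form under Google's AJAX crawling scheme. The fragment value is escaped and
# appended as a query parameter.
def escaped_fragment_url(url)
  base, fragment = url.split('#!', 2)
  return url if fragment.nil?                 # no #! fragment => nothing to rewrite
  separator = base.include?('?') ? '&' : '?'  # respect an existing query string
  "#{base}#{separator}_escaped_fragment_=#{CGI.escape(fragment)}"
end

escaped_fragment_url('http://localhost:3000/#!test')
# => "http://localhost:3000/?_escaped_fragment_=test"

The config.ru below wires the gem into a bare Rack app serving that page: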
#
# to run:
# $ rackup config.ru -p 3000
# open browser to http://localhost:3000/#!test
#
require 'bundler/setup'
require './lib/google_ajax_crawler'

use GoogleAjaxCrawler::Crawler do |config|
  config.driver = GoogleAjaxCrawler::Drivers::CapybaraWebkit
  config.poll_interval    = 0.25 # how often to check if the page has loaded

  #
  # for the demo - the page is considered loaded when the loading mask has been removed from the DOM
  # this could evaluate something like $.active == 0 to ensure no jquery ajax calls are pending
  #
  config.page_loaded_test = lambda {|driver| driver.page.evaluate_script('document.getElementById("loading") == null') }
end
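
As the comment above hints, the page_loaded_test can be made stricter for pages that fire jQuery ajax calls while booting. A sketch, assuming jQuery is loaded on the page being rendered (the demo page above does not use jQuery, so this is purely illustrative); it would replace the page_loaded_test assignment inside the config block:

  config.page_loaded_test = lambda do |driver|
    # Consider the page loaded once the loading mask is gone AND no jQuery
    # ajax calls are pending. The typeof guard keeps the test from raising
    # on pages that never load jQuery.
    driver.page.evaluate_script(
      'document.getElementById("loading") == null && (typeof jQuery === "undefined" || jQuery.active === 0)'
    )
  end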

# a sample page using #! url fragments to seed page state
page_content = File.read('./page.html')
run lambda {|env| [200, { 'Content-Type' => 'text/html' }, [page_content]] }
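
Once the app is running (rackup config.ru -p 3000), a quick check from another terminal shows the difference between the raw markup and the rendered snapshot. A sketch using Ruby's standard library; the strings it looks for come from the demo page above:

require 'net/http'

raw      = Net::HTTP.get(URI('http://localhost:3000/'))                          # markup as browsers receive it
snapshot = Net::HTTP.get(URI('http://localhost:3000/?_escaped_fragment_=test'))  # snapshot served to crawlers

puts raw.include?('Loading....')                        # true - the raw page still contains the loading mask
puts snapshot.include?('Javascript rendering complete') # expected true - the snapshot contains the JS-rendered state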
