web crawling (plus7): scrapy commands (1)

Posted by 兔子的尾巴_Mini


Available commands (global — these also work outside a project):
bench Run quick benchmark test
fetch Fetch a URL using the Scrapy downloader
genspider Generate new spider using pre-defined templates
runspider Run a self-contained spider (without creating a project)
settings Get settings values
shell Interactive scraping console
startproject Create new project
version Print Scrapy version
view Open URL in browser, as seen by Scrapy

 

scrapy fetch [options] <url>

Fetch a URL using the Scrapy downloader and print its content to stdout. You
may want to use --nolog to disable logging

Options
=======
--help, -h show this help message and exit
--spider=SPIDER use this spider
--headers print response HTTP headers instead of body
--no-redirect do not handle HTTP 3xx status codes and print response
as-is

Global Options
--------------
--logfile=FILE log file. if omitted stderr will be used
--loglevel=LEVEL, -L LEVEL
log level (default: DEBUG)
--nolog disable logging completely
--profile=FILE write python cProfile stats to FILE
--pidfile=FILE write process ID to FILE
--set=NAME=VALUE, -s NAME=VALUE
set/override setting (may be repeated)
--pdb enable pdb on failure

 

scrapy runspider <spider_file.py>

Run a self-contained spider from a single .py file, without creating a project.
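A minimal sketch of the runspider workflow: write a tiny spider to one file and point the command at it. The spider name, file name, and the quotes.toscrape.com practice site are all illustrative; the final command is left commented because it needs network access.

```shell
# Minimal one-file spider; all names here are illustrative.
cat > quotes_spider.py <<'EOF'
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com"]

    def parse(self, response):
        # Yield one item per quote text found on the page
        for text in response.css("span.text::text").getall():
            yield {"text": text}
EOF

# Run it directly (needs network access), writing items to quotes.json:
# scrapy runspider quotes_spider.py -o quotes.json --nolog
```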

 

scrapy shell <url> --nolog

Opens an interactive scraping console on the fetched page; with IPython installed you land at its In [1]: prompt, with the response object ready to inspect.

 

scrapy startproject project_name

 

scrapy version

 

scrapy view (download a page and open it in the browser, exactly as Scrapy "sees" it — useful for spotting content rendered by JavaScript that a spider will never receive)

e.g.: scrapy view <url>

 

Project commands (available inside a project directory):

 


Available commands:
bench Run quick benchmark test
check Check spider contracts
crawl Run a spider
edit Edit spider
fetch Fetch a URL using the Scrapy downloader
genspider Generate new spider using pre-defined templates
list List available spiders
parse Parse URL (using its spider) and print the results
runspider Run a self-contained spider (without creating a project)
settings Get settings values
shell Interactive scraping console
startproject Create new project
version Print Scrapy version
view Open URL in browser, as seen by Scrapy

 


E:\m\f1>scrapy genspider -l
Available templates:
basic
crawl
csvfeed
xmlfeed

 


E:\m\f1>scrapy genspider -t basic spider baidu.com
Created spider 'spider' using template 'basic' in module:
f1.spiders.spider

E:\m\f1>scrapy check spider

----------------------------------------------------------------------
Ran 0 contracts in 0.000s

OK

E:\m\f1>scrapy crawl spider

 

E:\m\f1>scrapy list
spider

E:\m\f1>scrapy edit spider (opens the spider file in the editor from the EDITOR setting/environment variable; works out of the box on Linux, while Windows usually needs EDITOR configured first)

E:\m\f1>scrapy parse http://www.baidu.com/ (fetch the URL, process it with the spider that handles it, and print the resulting items and requests)
