【Problem】
Created a project with Scrapy:
E:\Dev_Root\python\Scrapy>scrapy startproject songtaste
Then ran the project, but got an error:
E:\Dev_Root\python\Scrapy>scrapy crawl songtaste -t json -o h1user.json
Scrapy 0.16.2 - no active project
Unknown command: crawl
Use "scrapy" to see available commands
【Solution Process】
1. Referred to:
Trying to get Scrapy into a project to run Crawl command
Only then did I learn that you have to switch into the corresponding project folder before running crawl. Scrapy decides whether there is an "active project" by looking for the scrapy.cfg file that scrapy startproject creates at the project root, so crawl only works from inside that directory tree (a small sanity-check sketch is included after the log below).
2. So I switched into the project directory, the songtaste subfolder, and ran the command again; this time it worked:
E:\Dev_Root\python\Scrapy>cd songtaste

E:\Dev_Root\python\Scrapy\songtaste>scrapy crawl songtaste -t json -o h1user.json
2012-11-12 22:39:13+0800 [scrapy] INFO: Scrapy 0.16.2 started (bot: songtaste)
2012-11-12 22:39:13+0800 [scrapy] DEBUG: Enabled extensions: FeedExporter, LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2012-11-12 22:39:14+0800 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-11-12 22:39:14+0800 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-11-12 22:39:14+0800 [scrapy] DEBUG: Enabled item pipelines:
2012-11-12 22:39:14+0800 [songtaste] INFO: Spider opened
2012-11-12 22:39:14+0800 [songtaste] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-11-12 22:39:14+0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-11-12 22:39:14+0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-11-12 22:39:14+0800 [songtaste] DEBUG: Crawled (200) <GET http://www.songtaste.com/user/351979/> (referer: None)
2012-11-12 22:39:14+0800 [songtaste] DEBUG: Scraped from <200 http://www.songtaste.com/user/351979/>
        {'h1user': [u'crifan']}
2012-11-12 22:39:14+0800 [songtaste] INFO: Closing spider (finished)
2012-11-12 22:39:14+0800 [songtaste] INFO: Stored json feed (1 items) in: h1user.json
2012-11-12 22:39:14+0800 [songtaste] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 235,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 11058,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2012, 11, 12, 14, 39, 14, 431000),
         'item_scraped_count': 1,
         'log_count/DEBUG': 8,
         'log_count/INFO': 5,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2012, 11, 12, 14, 39, 14, 275000)}
2012-11-12 22:39:14+0800 [songtaste] INFO: Spider closed (finished)
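As mentioned in step 1, the "no active project" message simply means Scrapy could not find a scrapy.cfg file from where the command was run. Below is a minimal sketch of a sanity check you could run before crawling; this little script is purely illustrative and not part of Scrapy itself:

import os
import sys

# Illustrative sketch: Scrapy treats a directory tree as an "active project"
# when it can find a scrapy.cfg file (it also searches parent directories),
# so "scrapy crawl" must be run from inside the project created by
# "scrapy startproject".
if not os.path.exists("scrapy.cfg"):
    sys.exit("scrapy.cfg not found: cd into the project directory "
             "before running 'scrapy crawl <spider>'.")

print("scrapy.cfg found; 'scrapy crawl songtaste' should work from here.")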
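For readers following along: the spider code itself is not shown in this post, so the sketch below is only a guess at what a Scrapy 0.16-era spider producing the logged item {'h1user': [u'crifan']} might look like. The item class, field name, and XPath are assumptions reconstructed from the log output, not the original code:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item, Field


class SongtasteItem(Item):
    # Hypothetical item with the single field seen in the log output.
    h1user = Field()


class SongtasteSpider(BaseSpider):
    name = "songtaste"  # must match the name passed to "scrapy crawl"
    allowed_domains = ["songtaste.com"]
    start_urls = ["http://www.songtaste.com/user/351979/"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        item = SongtasteItem()
        # Assumed XPath: grab the user name from the page's <h1> heading.
        item["h1user"] = hxs.select("//h1/text()").extract()
        return item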
【Summary】
Although this error is quite basic, it is still an easy one to make when you are new to Scrapy and not yet familiar with how it finds a project.