The Ultimate Way to Scrape Web Page Data with Python! You Deserve It!

Posted by py147



Say you're hunting around the web for the raw data a project needs, and the bad news is that the data lives inside a web page, with no API available for pulling it out. In that case, you can solve it like this:


import pandas as pd

tables = pd.read_html("https://apps.sandiego.gov/sdfiredispatch/")
print(tables[0])

That's all it takes! Pandas finds every significant HTML table on the page and returns them as a list of new DataFrame objects.
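Since read_html always returns a list, it's worth checking how many tables it actually found before assuming yours is the first one. A minimal sketch of that sanity check (indexing the dispatch table at position 0 is an assumption; on another page the table you want may sit elsewhere in the list):

import pandas as pd

# read_html returns one DataFrame per HTML table it can parse on the page
tables = pd.read_html("https://apps.sandiego.gov/sdfiredispatch/")

print(len(tables))      # how many tables were found
calls_df = tables[0]    # assumption: the dispatch table is the first one
print(calls_df.shape)   # quick sanity check: rows x columns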

Tell it that row 0 of the table holds the column headers, and ask it to convert the text-based dates into datetime objects:

import pandas as pd

calls_df, = pd.read_html("http://apps.sandiego.gov/sdfiredispatch/", header=0, parse_dates=["Call Date"])
print(calls_df)

You get:

            Call Date Call Type       Street              Cross Streets Unit
  2017-06-02 17:27:58   Medical  HIGHLAND AV  WIGHTMAN ST/UNIVERSITY AV  E17
  2017-06-02 17:27:58   Medical  HIGHLAND AV  WIGHTMAN ST/UNIVERSITY AV  M34
  2017-06-02 17:23:51   Medical   EMERSON ST     LOCUST ST/EVERGREEN ST  E22
  2017-06-02 17:23:51   Medical   EMERSON ST     LOCUST ST/EVERGREEN ST  M47
  2017-06-02 17:23:15   Medical  MARAUDER WY      BARON LN/FROBISHER ST  E38
  2017-06-02 17:23:15   Medical  MARAUDER WY      BARON LN/FROBISHER ST  M41

One more one-liner makes the data available as JSON records:

import pandas as pd

calls_df, = pd.read_html("http://apps.sandiego.gov/sdfiredispatch/", header=0, parse_dates=["Call Date"])
print(calls_df.to_json(orient="records", date_format="iso"))

Run it and you get nicely formatted JSON output, complete with proper ISO 8601 dates:

[
  {
    "Call Date": "2017-06-02T17:34:00.000Z",
    "Call Type": "Medical",
    "Street": "ROSECRANS ST",
    "Cross Streets": "HANCOCK ST/ALLEY",
    "Unit": "M21"
  },
  {
    "Call Date": "2017-06-02T17:34:00.000Z",
    "Call Type": "Medical",
    "Street": "ROSECRANS ST",
    "Cross Streets": "HANCOCK ST/ALLEY",
    "Unit": "T20"
  },
  {
    "Call Date": "2017-06-02T17:30:34.000Z",
    "Call Type": "Medical",
    "Street": "SPORTS ARENA BL",
    "Cross Streets": "CAM DEL RIO WEST/EAST DR",
    "Unit": "E20"
  }
  // etc...
]
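If you would rather land those JSON records in a file than print them to the console, the same to_json call does it. A minimal sketch (the calls.json filename is just an illustration):

import pandas as pd

calls_df, = pd.read_html("http://apps.sandiego.gov/sdfiredispatch/", header=0, parse_dates=["Call Date"])

# write the ISO-dated JSON records to disk instead of printing them
with open("calls.json", "w") as f:
    f.write(calls_df.to_json(orient="records", date_format="iso"))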

You can even save the data straight to a CSV or XLS file:

import pandas as pd

calls_df, = pd.read_html("http://apps.sandiego.gov/sdfiredispatch/", header=0, parse_dates=["Call Date"])
calls_df.to_csv("calls.csv", index=False)
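The Excel side works the same way through DataFrame.to_excel; here is a minimal sketch, assuming an Excel writer engine such as openpyxl is installed in your environment (that dependency is an assumption, not something the article's code shows):

import pandas as pd

calls_df, = pd.read_html("http://apps.sandiego.gov/sdfiredispatch/", header=0, parse_dates=["Call Date"])

# needs an Excel writer engine, e.g. "pip install openpyxl"
calls_df.to_excel("calls.xlsx", index=False)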

Run the CSV version and double-click calls.csv to open it in your spreadsheet app.


And of course, Pandas makes it just as easy to filter, sort, or otherwise process the data:

>>> calls_df.describe()
                  Call Date Call Type      Street           Cross Streets Unit
count                    69        69          69                      64   69
unique                   29         2          29                      27   60
top     2017-06-02 16:59:50   Medical  CHANNEL WY  LA SALLE ST/WESTERN ST   E1
freq                      5        66           5                       5    2
first   2017-06-02 16:36:46       NaN         NaN                     NaN  NaN
last    2017-06-02 17:41:30       NaN         NaN                     NaN  NaN

>>> calls_df.groupby("Call Type").count()
                       Call Date  Street  Cross Streets  Unit
Call Type
Medical                       66      66             61    66
Traffic Accident (L1)          3       3              3     3

>>> calls_df["Unit"].unique()
array(['E46', 'MR33', 'T40', 'E201', 'M6', 'E34', 'M34', 'E29', 'M30',
       'M43', 'M21', 'T20', 'E20', 'M20', 'E26', 'M32', 'SQ55', 'E1',
       'M26', 'BLS4', 'E17', 'E22', 'M47', 'E38', 'M41', 'E5', 'M19',
       'E28', 'M1', 'E42', 'M42', 'E23', 'MR9', 'PD', 'LCCNOT', 'M52',
       'E45', 'M12', 'E40', 'MR40', 'M45', 'T1', 'M23', 'E14', 'M2',
       'E39', 'M25', 'E8', 'M17', 'E4', 'M22', 'M37', 'E7', 'M31', 'E9',
       'M39', 'SQ56', 'E10', 'M44', 'M11'], dtype=object)
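The describe, groupby, and unique calls above only scratch the surface; filtering and sorting are just as terse. A minimal sketch of the kind of follow-up you might run (the column names come from the table above, everything else is illustrative):

import pandas as pd

calls_df, = pd.read_html("http://apps.sandiego.gov/sdfiredispatch/", header=0, parse_dates=["Call Date"])

# keep only the medical calls, newest first
medical_calls = calls_df[calls_df["Call Type"] == "Medical"].sort_values("Call Date", ascending=False)
print(medical_calls.head())

# how many calls each unit responded to
print(calls_df["Unit"].value_counts().head(10))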

