The urllib Module in Python 3
Posted by 天真莫离
Introduction
urllib is Python's package for fetching URLs (Uniform Resource Locators); it can be used to retrieve remote data.
Common methods
(1) urlopen
urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)
urllib.request.urlopen() fetches a page. The page content is returned as bytes, so it must be converted to str with decode().
Parameters:
- url : the URL to open
- data : a dict of data to submit. When it is None (the default) the request is a GET; when data is provided, urlopen() submits a POST. Note that for a POST the data must first be converted to bytes (see the sketch after the examples below).
- timeout : timeout for the request, in seconds
from urllib import request

response = request.urlopen("http://members.3322.org/dyndns/getip")  # <http.client.HTTPResponse object at 0x031F63B0>
page = response.read()         # b'106.37.169.186\n'
page = page.decode("utf-8")    # '106.37.169.186\n'
# Using a with statement
with request.urlopen("http://members.3322.org/dyndns/getip") as response:
    page = response.read()
    print(page.decode("utf-8"))
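As a minimal sketch of the data and timeout parameters listed above (httpbin.org is assumed here purely as a convenient echo service, not part of the original example):

from urllib import request, parse

# GET with a 5-second timeout
response = request.urlopen("http://members.3322.org/dyndns/getip", timeout=5)
print(response.read().decode("utf-8"))

# POST: when data is not None, urlopen() issues a POST request.
# The data must be bytes, so urlencode() the dict and encode() it first.
data = parse.urlencode({"first": "true", "pn": 1, "kd": "Python"}).encode("utf-8")
response = request.urlopen("http://httpbin.org/post", data=data, timeout=5)
print(response.read().decode("utf-8"))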
Note: urllib.request uses the same interface to handle all kinds of URLs. For example:
req = urllib.request.urlopen('ftp://example.com/')
Methods provided by the object urlopen returns (a short sketch follows this list):
- read(), readline(), readlines(), fileno(), close() : operate on the HTTPResponse data
- info() : returns an HTTPMessage object containing the headers sent by the remote server
- getcode() : returns the HTTP status code; for an HTTP request, 200 means the request completed successfully and 404 means the page was not found
- geturl() : returns the URL that was requested
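A short sketch that exercises these methods on the response object from the earlier example:

from urllib import request

with request.urlopen("http://members.3322.org/dyndns/getip") as response:
    print(response.getcode())               # e.g. 200
    print(response.geturl())                # the URL that was actually fetched
    print(response.info())                  # HTTPMessage with the response headers
    print(response.read().decode("utf-8"))  # the body, decoded to str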
(2) Request
urllib.request.Request(url, data=None, headers={}, method=None)
from urllib import request

url = r'http://www.lagou.com/zhaopin/Python/?labelWords=label'
headers = {
    'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Referer': r'http://www.lagou.com/zhaopin/Python/?labelWords=label',
    'Connection': 'keep-alive'
}
req = request.Request(url, headers=headers)
page = request.urlopen(req).read()
page = page.decode('utf-8')
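The method argument in the signature above can force a specific HTTP verb. A minimal sketch using a HEAD request against the same URL (how that particular site answers a HEAD request is an assumption here):

from urllib import request

url = r'http://www.lagou.com/zhaopin/Python/?labelWords=label'
# Without data, Request defaults to GET; method='HEAD' overrides that,
# so only the response headers are fetched.
req = request.Request(url, method='HEAD')
with request.urlopen(req) as response:
    print(response.status)        # e.g. 200
    print(response.getheaders())  # list of (header, value) tuples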
(3) parse.urlencode
urllib.parse.urlencode(query, doseq=False, safe='', encoding=None, errors=None)
The main job of urlencode() is to encode the data to be submitted into a URL query string (key=value pairs joined by &), which can either be appended to the URL or sent as the body of a POST.
from urllib import request, parse

url = r'http://www.lagou.com/jobs/positionAjax.json?'
headers = {
    'User-Agent': r'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) '
                  r'Chrome/45.0.2454.85 Safari/537.36 115Browser/6.0.3',
    'Referer': r'http://www.lagou.com/zhaopin/Python/?labelWords=label',
    'Connection': 'keep-alive'
}
data = {
    'first': 'true',
    'pn': 1,
    'kd': 'Python'
}
# urlencode() turns the dict into 'first=true&pn=1&kd=Python'.
# POST data must be bytes (or an iterable of bytes), not str, hence the encode().
data = parse.urlencode(data).encode('utf-8')  # b'first=true&pn=1&kd=Python'
# Because data is passed to Request, it is sent as the POST body rather than appended to the URL.
req = request.Request(url, headers=headers, data=data)  # <urllib.request.Request object at 0x02F52A30>
page = request.urlopen(req).read()
# page is bytes, e.g. b'{"success":false,"msg":"\xe6\x82\xa8...","clientIp":"106.37.169.186"}\n'
page = page.decode('utf-8')
# page is now a str: '{"success":false,"msg":"您操作太频繁,请稍后再访问","clientIp":"106.37.169.186"}'
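The doseq parameter in the urlencode() signature controls how sequence values are encoded; a quick, self-contained illustration:

from urllib import parse

params = {'kd': 'Python', 'city': ['Beijing', 'Shanghai']}
print(parse.urlencode(params))
# kd=Python&city=%5B%27Beijing%27%2C+%27Shanghai%27%5D   (the list is quoted as a single value)
print(parse.urlencode(params, doseq=True))
# kd=Python&city=Beijing&city=Shanghai                   (one key=value pair per element)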
(4) Proxies: request.ProxyHandler(proxies=None)
When the site you want to scrape restricts access, a proxy can be used to fetch the data.
from urllib import request, parse

url = r'http://www.lagou.com/jobs/positionAjax.json?'   # same URL as the previous example
data = {
    'first': 'true',
    'pn': 1,
    'kd': 'Python'
}
proxy = request.ProxyHandler({'http': '5.22.195.215:80'})  # configure the proxy
opener = request.build_opener(proxy)                       # build an opener that uses it
request.install_opener(opener)                             # install it as the global opener
data = parse.urlencode(data).encode('utf-8')
page = opener.open(url, data).read()
page = page.decode('utf-8')
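A note on this design: build_opener() returns an opener you can call directly via opener.open(), while install_opener() additionally makes it the default, so the plain module-level urlopen() is routed through the proxy as well. A minimal sketch (the proxy address is just the placeholder from the example above):

from urllib import request

proxy = request.ProxyHandler({'http': '5.22.195.215:80'})  # placeholder proxy
request.install_opener(request.build_opener(proxy))
# After install_opener(), ordinary urlopen() calls also go through the proxy.
page = request.urlopen('http://members.3322.org/dyndns/getip').read().decode('utf-8')
print(page)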
(5) Exception handling
urlopen raises URLError when it cannot handle a response; HTTPError is a subclass of URLError that is raised in the specific case of HTTP URLs. Both exception classes come from the urllib.error module.
URLError:
Typically, URLError is raised because there is no network connection (no route to the specified server) or because the specified server does not exist. In that case, the raised exception has a 'reason' attribute, which is a tuple containing an error code and an error message.
import urllib.request
import urllib.error

req = urllib.request.Request('http://www.pretend_server.org')
try:
    urllib.request.urlopen(req)
except urllib.error.URLError as e:
    print(e.reason)  # prints e.g. (4, 'getaddrinfo failed')
HTTPError:
Every HTTP response from the server carries a numeric status code. Sometimes the status code indicates that the server cannot fulfil the request. The default handlers deal with some of these responses for you (for example, if the response is a redirect asking the client to fetch the document from a different URL, urllib handles that for you). For responses it cannot handle, urlopen raises an HTTPError.
Typical errors include 404 (page not found), 403 (request forbidden), and 401 (authentication required).
# Table mapping response codes to messages; entries have the
# form {code: (shortmessage, longmessage)}.
responses = {
    100: ('Continue', 'Request received, please continue'),
    101: ('Switching Protocols', 'Switching to new protocol; obey Upgrade header'),
    200: ('OK', 'Request fulfilled, document follows'),
    201: ('Created', 'Document created, URL follows'),
    202: ('Accepted', 'Request accepted, processing continues off-line'),
    203: ('Non-Authoritative Information', 'Request fulfilled from cache'),
    204: ('No Content', 'Request fulfilled, nothing follows'),
    205: ('Reset Content', 'Clear input form for further input.'),
    206: ('Partial Content', 'Partial content follows.'),
    300: ('Multiple Choices', 'Object has several resources -- see URI list'),
    301: ('Moved Permanently', 'Object moved permanently -- see URI list'),
    302: ('Found', 'Object moved temporarily -- see URI list'),
    303: ('See Other', 'Object moved -- see Method and URL list'),
    304: ('Not Modified', 'Document has not changed since given time'),
    305: ('Use Proxy', 'You must use proxy specified in Location to access this resource.'),
    307: ('Temporary Redirect', 'Object moved temporarily -- see URI list'),
    400: ('Bad Request', 'Bad request syntax or unsupported method'),
    401: ('Unauthorized', 'No permission -- see authorization schemes'),
    402: ('Payment Required', 'No payment -- see charging schemes'),
    403: ('Forbidden', 'Request forbidden -- authorization will not help'),
    404: ('Not Found', 'Nothing matches the given URI'),
    405: ('Method Not Allowed', 'Specified method is invalid for this server.'),
    406: ('Not Acceptable', 'URI not available in preferred format.'),
    407: ('Proxy Authentication Required', 'You must authenticate with this proxy before proceeding.'),
    408: ('Request Timeout', 'Request timed out; try again later.'),
    409: ('Conflict', 'Request conflict.'),
    410: ('Gone', 'URI no longer exists and has been permanently removed.'),
    411: ('Length Required', 'Client must specify Content-Length.'),
    412: ('Precondition Failed', 'Precondition in headers is false.'),
    413: ('Request Entity Too Large', 'Entity is too large.'),
    414: ('Request-URI Too Long', 'URI is too long.'),
    415: ('Unsupported Media Type', 'Entity body in unsupported format.'),
    416: ('Requested Range Not Satisfiable', 'Cannot satisfy request range.'),
    417: ('Expectation Failed', 'Expect condition could not be satisfied.'),
    500: ('Internal Server Error', 'Server got itself in trouble'),
    501: ('Not Implemented', 'Server does not support this operation'),
    502: ('Bad Gateway', 'Invalid responses from another server/proxy.'),
    503: ('Service Unavailable', 'The server cannot process the request due to a high load'),
    504: ('Gateway Timeout', 'The gateway server did not receive a timely response'),
    505: ('HTTP Version Not Supported', 'Cannot fulfill request.'),
}
Exception handling example:
import urllib.request
import urllib.error

req = urllib.request.Request('http://www.python.org/fish.html')
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    print(e.code)
    print(e.info())
    print(e.geturl())
    print(e.read())
Or:
from urllib.request import Request, urlopen
from urllib.error import URLError

req = Request(someurl)
try:
    response = urlopen(req)
except URLError as e:
    if hasattr(e, 'reason'):
        print('We failed to reach a server.')
        print('Reason: ', e.reason)
    elif hasattr(e, 'code'):
        print('The server couldn\'t fulfill the request.')
        print('Error code: ', e.code)
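Because HTTPError is a subclass of URLError, it can also be caught in its own except clause as long as that clause comes first; a sketch of that alternative (someurl is still a placeholder):

from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

req = Request(someurl)
try:
    response = urlopen(req)
except HTTPError as e:   # must come before URLError, since HTTPError is its subclass
    print('The server couldn\'t fulfill the request.')
    print('Error code: ', e.code)
except URLError as e:
    print('We failed to reach a server.')
    print('Reason: ', e.reason)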