Python Crawler Basics: Breaking Through Anti-Crawler Blocks
Posted by 公子缘
>> Key concepts
>> request: a request sent from the client to the server, carrying the data the user submitted as well as some information about the client itself. The client can submit data through an HTML form or by appending parameters to the page URL; the server then reads that data through the request object's methods, which mainly deal with the parameters and options the client browser submitted. In a Python crawler, "request" simply means Python sending a request to a server and receiving whatever the server returns.
>> POST and GET data transfer:
> Common HTTP request methods include GET, POST, PUT, DELETE, and so on.
> GET is the simpler request: the data sent to the web server is appended directly to the request URL, in the form ?key1=value1&key2=value2. It is only suitable for small amounts of data with no privacy requirements.
> POST encodes the data to be sent into the request body. It can carry large amounts of data, keeps the data out of the URL, and is commonly used for form submission.
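To make the difference concrete, here is a small standard-library sketch (no network access; the URL and parameter names are made-up placeholders) showing how the same data rides in a GET query string versus a POST request body:

```python
from urllib.parse import urlencode
import urllib.request

params = {"key1": "value1", "key2": "value2"}

# GET: the data travels in the URL itself, after a "?"
get_url = "http://example.com/search?" + urlencode(params)
print(get_url)  # http://example.com/search?key1=value1&key2=value2

# POST: the same data is percent-encoded into the request body as bytes
post_data = urlencode(params).encode("utf-8")
post_request = urllib.request.Request("http://example.com/search", data=post_data)
print(post_request.get_method())  # POST -- urllib infers the method from `data`
```

Note that urllib switches the method to POST automatically as soon as a `data` argument is supplied.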
>> Constructing a plausible HTTP request
> Some sites will not serve a program that accesses them the way shown above; if the request looks wrong, the site simply won't respond. So, to fully imitate how a browser works, you need to set some HTTP request-header fields.
> HTTP request headers are a set of attributes and configuration values passed along each time a request is sent to a web server. HTTP defines a dozen or so obscure request-header types, but most of them are rarely used. Only the following seven fields are used by most browsers to initialize every network request:
| Header | Description |
| --- | --- |
| Host | The host name (and optional port) of the server being requested |
| Connection | keep-alive (the default) asks for a persistent connection; close indicates the current TCP connection will be torn down once this request has been handled |
| Accept | The content types the browser can accept in the server's response |
| User-Agent | Identifies the browser type, operating system and version, CPU type, rendering engine, browser language, installed plugins, and so on, to the site being visited |
| Referer | The URL of the page from which this request was made |
| Accept-Encoding | The content encodings (e.g. gzip, deflate) the browser can decompress |
| Accept-Language | The languages the browser can accept |
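These fields can be attached to a `urllib.request.Request` and inspected before anything is sent over the wire; a minimal offline sketch (the URL is just a familiar placeholder, no connection is opened):

```python
import urllib.request

headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64)",
    "Accept": "text/html,application/xhtml+xml,*/*",
    "Accept-Language": "zh-CN,zh;q=0.9",
    "Connection": "keep-alive",
}

# Constructing the Request does not contact the server, so this runs offline
request = urllib.request.Request("http://www.baidu.com", headers=headers)

# urllib stores header names with only the first letter capitalized,
# so "User-Agent" is looked up as "User-agent"
print(request.get_header("User-agent"))  # Mozilla/5.0 (Windows NT 10.0; WOW64)
print(request.host)  # www.baidu.com -- urllib derives the Host header from the URL
```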
>> A simple example:

```python
# -*- coding: utf-8 -*-

import urllib.request

def baiduNet():
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36",
        "Connection": "keep-alive",
    }
    request = urllib.request.Request("http://www.baidu.com", headers=headers)
    response = urllib.request.urlopen(request).read()
    netcontext = response.decode("utf-8")

    # Save the page source; a context manager makes sure the file is closed
    with open("baidutext.txt", "w", encoding="utf-8") as file:
        file.write(netcontext)

if __name__ == "__main__":
    baiduNet()
```
>> Upgrading the example:

```python
# -*- coding: utf-8 -*-

import random
import urllib.request

def requests_headers():
    head_connection = ['Keep-Alive', 'close']
    head_accept = ['text/html,application/xhtml+xml,*/*']
    head_accept_language = ['zh-CN,fr-FR;q=0.5',
                            'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3']
    head_user_agent = [
        'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko',
        'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.95 Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; rv:11.0) like Gecko',
        'Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1',
        'Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070309 Firefox/2.0.0.3',
        'Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070803 Firefox/1.5.0.12',
        'Opera/9.27 (Windows NT 5.2; U; zh-cn)',
        'Mozilla/5.0 (Macintosh; PPC Mac OS X; U; en) Opera 8.0',
        'Opera/8.0 (Macintosh; PPC Mac OS X; U; en)',
        'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.12) Gecko/20080219 Firefox/2.0.0.12 Navigator/9.0.0.6',
        'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0)',
        'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)',
        'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Maxthon/4.0.6.2000 Chrome/26.0.1410.43 Safari/537.1',
        'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E; QQBrowser/7.3.9825.400)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.92 Safari/537.1 LBBROWSER',
        'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; BIDUBrowser 2.x)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/3.0 Safari/536.11',
    ]

    # Pick a random value for each of the common header fields
    header = {
        'Connection': random.choice(head_connection),
        'Accept': head_accept[0],
        'Accept-Language': random.choice(head_accept_language),
        'User-Agent': random.choice(head_user_agent),
    }
    return header  # the finished header dict

def baiduNet():
    headers = requests_headers()
    request = urllib.request.Request("http://www.baidu.com", headers=headers)
    response = urllib.request.urlopen(request).read()
    netcontext = response.decode("utf-8")

    with open("baidutext.txt", "w", encoding="utf-8") as file:
        file.write(netcontext)

if __name__ == "__main__":
    baiduNet()
```
>> If you keep crawling the target site from one and the same IP address, the target server will ban you once you make too many requests, so you need to change your IP regularly. That is what proxy servers are for.
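A common pattern is to keep a pool of proxies and rotate through them, building a fresh opener for each request. A minimal offline sketch (the proxy addresses below are hypothetical placeholders, not working proxies):

```python
import itertools
import urllib.request

# Hypothetical placeholder proxies -- replace with real "ip:port" entries
proxy_pool = itertools.cycle(["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"])

def next_opener():
    """Build an opener that routes HTTP(S) traffic through the next proxy in the pool."""
    proxy = next(proxy_pool)
    handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
    return urllib.request.build_opener(handler), proxy

opener, proxy = next_opener()
print(proxy)  # 10.0.0.1:8080 -- the next call would use 10.0.0.2:8080, then 10.0.0.3:8080, then wrap around
```

Building the opener does not contact the proxy; nothing is sent until `opener.open(...)` is called.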
>> Example code:

```python
# -*- coding: utf-8 -*-

import random
import urllib.request

def requests_headers():
    # Same random-header helper as in the previous example
    head_connection = ['Keep-Alive', 'close']
    head_accept = ['text/html,application/xhtml+xml,*/*']
    head_accept_language = ['zh-CN,fr-FR;q=0.5',
                            'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3']
    head_user_agent = [
        'Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko',
        'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.95 Safari/537.36',
        'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; rv:11.0) like Gecko',
        'Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1',
        'Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070309 Firefox/2.0.0.3',
        'Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070803 Firefox/1.5.0.12',
        'Opera/9.27 (Windows NT 5.2; U; zh-cn)',
        'Mozilla/5.0 (Macintosh; PPC Mac OS X; U; en) Opera 8.0',
        'Opera/8.0 (Macintosh; PPC Mac OS X; U; en)',
        'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.12) Gecko/20080219 Firefox/2.0.0.12 Navigator/9.0.0.6',
        'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0)',
        'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)',
        'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Maxthon/4.0.6.2000 Chrome/26.0.1410.43 Safari/537.1',
        'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E; QQBrowser/7.3.9825.400)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.92 Safari/537.1 LBBROWSER',
        'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; BIDUBrowser 2.x)',
        'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/3.0 Safari/536.11',
    ]

    # Pick a random value for each of the common header fields
    header = {
        'Connection': random.choice(head_connection),
        'Accept': head_accept[0],
        'Accept-Language': random.choice(head_accept_language),
        'User-Agent': random.choice(head_user_agent),
    }
    return header  # the finished header dict

def baiduNetProxy():
    headers = requests_headers()
    proxies = ["proxy_ip:proxy_port"]  # placeholder: fill in real "ip:port" proxies
    # Create a proxy handler with a randomly chosen proxy
    proxy_handler = urllib.request.ProxyHandler({"http": random.choice(proxies)})
    # Build an opener that sends HTTP requests through the proxy
    opener = urllib.request.build_opener(proxy_handler)

    # addheaders expects a list of (key, value) tuples
    opener.addheaders = list(headers.items())

    response = opener.open("http://www.baidu.com")
    netcontext = response.read().decode("utf-8")

    with open("baidutext.txt", "w", encoding="utf-8") as file:
        file.write(netcontext)

if __name__ == "__main__":
    baiduNetProxy()
```
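In practice a blocked request or a dead proxy raises an exception instead of returning data, so it is worth wrapping the call. A minimal sketch of such a wrapper (the function name and structure are my own, not from the original post):

```python
import urllib.error
import urllib.request

def fetch(url, opener=None, timeout=10):
    """Fetch a URL, returning the decoded body, or None on failure."""
    try:
        if opener is None:
            response = urllib.request.urlopen(url, timeout=timeout)
        else:
            response = opener.open(url, timeout=timeout)
        return response.read().decode("utf-8")
    except urllib.error.HTTPError as e:
        # The server answered but refused us (e.g. 403 when we are blocked)
        print("HTTP error:", e.code)
    except urllib.error.URLError as e:
        # No answer at all: dead proxy, DNS failure, timeout, unknown scheme...
        print("Connection failed:", e.reason)
    return None
```

Catch `HTTPError` before `URLError`: it is a subclass of `URLError`, so the more specific handler must come first.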
Corrections are welcome!
If you repost this, please credit the source: https://www.cnblogs.com/Charles-Yuan/p/9903489.html