Teacher Cui's source code for scraping the top 100 (returns 403)

Posted by xlsxls


Editor's note: this article was compiled by the editors at cha138.com. It presents Teacher Cui's source code for scraping the top 100 (which returns a 403), in the hope that it offers some reference value.

import json
from multiprocessing import Pool
import requests
from requests.exceptions import RequestException
import re

def get_one_page(url):
    try:
        response = requests.get(url)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None

def parse_one_page(html):
    # Raw strings keep the \d escape intact; adjacent literals are concatenated
    pattern = re.compile(r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
                         r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
                         r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)
    items = re.findall(pattern, html)
    for item in items:
        yield {
            'index': item[0],
            'image': item[1],
            'title': item[2],
            'actor': item[3].strip()[3:],   # drop the "主演:" prefix
            'time': item[4].strip()[5:],    # drop the "上映时间:" prefix
            'score': item[5] + item[6]      # integer part + fraction part
        }
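The regex can be sanity-checked against a hand-written `<dd>` snippet that imitates one entry of Maoyan's board markup (the sample HTML below is invented for illustration, not copied from the live site):

```python
import re

# Same pattern as in parse_one_page above
pattern = re.compile(r'<dd>.*?board-index.*?>(\d+)</i>.*?data-src="(.*?)".*?name"><a'
                     r'.*?>(.*?)</a>.*?star">(.*?)</p>.*?releasetime">(.*?)</p>'
                     r'.*?integer">(.*?)</i>.*?fraction">(.*?)</i>.*?</dd>', re.S)

# Invented snippet mimicking a single board entry
sample = ('<dd><i class="board-index">1</i>'
          '<img data-src="http://example.com/poster.jpg">'
          '<p class="name"><a href="/films/1">霸王别姬</a></p>'
          '<p class="star">主演:张国荣</p>'
          '<p class="releasetime">上映时间:1993-01-01</p>'
          '<p class="score"><i class="integer">9.</i><i class="fraction">6</i></p></dd>')

item = re.findall(pattern, sample)[0]
print(item[0], item[2], item[5] + item[6])  # 1 霸王别姬 9.6
```

The slicing in the generator then strips the "主演:" and "上映时间:" prefixes from groups 4 and 5.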

def write_to_file(content):
    # The with-statement closes the file automatically; no explicit close() needed
    with open('result.txt', 'a', encoding='utf-8') as f:
        f.write(json.dumps(content, ensure_ascii=False) + '\n')
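The `ensure_ascii=False` flag is what keeps Chinese titles readable in the output file; without it, `json.dumps` escapes every non-ASCII character. A quick comparison (the record is a made-up example):

```python
import json

record = {'index': '1', 'title': '霸王别姬', 'score': '9.6'}

# With ensure_ascii=False the title is written as-is
print(json.dumps(record, ensure_ascii=False))
# {"index": "1", "title": "霸王别姬", "score": "9.6"}

# The default escapes it to \uXXXX sequences
print(json.dumps(record))
# {"index": "1", "title": "\u9738\u738b\u522b\u59ec", "score": "9.6"}
```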

def main(offset):
    url = 'http://maoyan.com/board/4?offset=' + str(offset)
    html = get_one_page(url)
    if html is None:  # request failed or returned non-200 (e.g. the 403 from the title)
        return
    for item in parse_one_page(html):
        print(item)
        write_to_file(item)


if __name__ == '__main__':
    pool = Pool()
    pool.map(main, [i*10 for i in range(10)])
    pool.close()
    pool.join()
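The 403 mentioned in the title happens because Maoyan rejects requests that carry no browser-like `User-Agent`. A common workaround is to pass a `headers` argument to `requests.get` — a minimal sketch, with an illustrative UA string, and with no guarantee against the site's current anti-bot measures:

```python
import requests
from requests.exceptions import RequestException

# Any recent browser User-Agent string will do; this one is illustrative
HEADERS = {
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/100.0.4896.60 Safari/537.36')
}

def get_one_page(url):
    try:
        # Sending browser-like headers avoids the plain-requests 403
        response = requests.get(url, headers=HEADERS)
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        return None
```

Only `get_one_page` changes; the rest of the script can stay as it is.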

 

That covers the main content of Teacher Cui's source code for scraping the top 100 (returns 403). If it did not solve your problem, see the following articles:

# [Crawler Demo] Scraping the Maoyan movies top 100 with pyquery + csv

Scraping Zhihu trending topics with Python

Crawler practice 01: scraping the Maoyan movies top 100 chart

Site scraping, case one: Maoyan movies TOP100

Scraping the Maoyan movies top 100 chart with Python

Scraping the Maoyan movies top 100 with Python