A crawler that fetches the Zhihu timeline

Posted by 安阳小栈-客官歇会吧


This post introduces a small crawler that pulls the Zhihu recommendation timeline (the "topstory" feed API) and prints the question titles and answer bodies it returns.

# -*- coding: utf-8 -*-
import time

import requests

# cookielib was renamed to http.cookiejar in Python 3
try:
    import cookielib
except ImportError:
    import http.cookiejar as cookielib

headers = {
    "Host": "www.zhihu.com",
    "Accept-Language": "zh-CN,zh;q=0.8",
    "Accept": "application/json, text/plain, */*",
    "Referer": "https://www.zhihu.com/",
    "Connection": "keep-alive",
    "User-Agent": "Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Mobile Safari/537.36",
    "authorization": "Bearer Mi4xUXJGd0FBQUFBQUFBa0VKNTBfbnVDeGNBQUFCaEFsVk5OQmZMV1FCVnQ3aEhfeUVsUElGN1Zrd3RSSWpMdHI0ZG5B|1503889972|a235d0e24d646c5df6b1f667abc005381c273870"
}
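
# Note: the "authorization" Bearer token above has to be captured from a
# logged-in browser session (visible among the request headers in the
# browser's developer tools). It is tied to one account and will eventually
# expire, so replace it with your own value before running the script.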

def get_session():
    # build a requests session whose cookies persist in a local file
    session = requests.session()
    session.cookies = cookielib.LWPCookieJar(filename="cookies")
    try:
        session.cookies.load()
        print("cookies loaded successfully!")
    except (IOError, cookielib.LoadError):
        print("could not load cookies...")
    return session
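
# A minimal sketch of how the "cookies" file could be produced in the first
# place, assuming the session has already been authenticated somehow (for
# example by replaying a login request or importing browser cookies):
#
#     session = get_session()
#     # ... authenticate the session here ...
#     session.cookies.save(ignore_discard=True, ignore_expires=True)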

session = get_session()

data = {
    "action": "down",  # pull entries further down the feed
    "limit": "10",     # items per page
    "session_token": "c9c3581148b6d633275ba5d4412d3bd8",
    "after_id": "0",   # pagination cursor, advanced in the loop below
    "desktop": "true"
}
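
# Note: "session_token" is another per-session value; like the Bearer token
# above, it is presumably captured from one's own feed request in the
# browser and may stop working once that session expires.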

def get_data():
    # fetch one page of the timeline and print question titles and answers
    global count
    res = session.get("https://www.zhihu.com/api/v3/feed/topstory",
                      params=data, headers=headers)
    payload = res.json()
    for item in payload["data"]:
        try:
            print(item["target"]["question"]["title"])
        except KeyError:
            print("no question title here: " + str(item))
        try:
            print(item["target"]["content"])
        except KeyError:
            print("no answer content here: " + str(item))
        count += 1
        print()
count = 0
for n in range(5):                  # fetch 5 pages of 10 items each
    data["after_id"] = str(n * 10)  # advance the pagination cursor
    get_data()
    time.sleep(3)                   # pause between requests to stay polite

print(count)
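
If the goal is to keep the results rather than just print them, the same request loop can collect the raw feed items and dump them to disk. Below is a minimal sketch under the same assumptions as the script above; it reuses the module-level session, data, and headers, assumes Python 3, and the fetch_page/save_timeline names and the timeline.json path are only illustrative:

import json

def fetch_page(after_id):
    # fetch one page of the feed and return its raw item list
    data["after_id"] = str(after_id)
    res = session.get("https://www.zhihu.com/api/v3/feed/topstory",
                      params=data, headers=headers)
    res.raise_for_status()
    return res.json().get("data", [])

def save_timeline(pages=5, path="timeline.json"):
    # collect several pages and write them to a JSON file
    items = []
    for n in range(pages):
        items.extend(fetch_page(n * 10))
        time.sleep(3)  # same polite delay as the main loop
    with open(path, "w", encoding="utf-8") as f:
        json.dump(items, f, ensure_ascii=False, indent=2)
    return len(items)

print(save_timeline())  # prints the number of items written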

 
