Written out of boredom: scraping my CSDN blog titles and summaries with Python to find the most frequent words (I don't even know what I'm trying to do)
Posted by 杨旭华啊
1. Analyzing the Page
This site loads its data dynamically, so let's not dawdle and go straight to capturing the network requests:
- https://blog.csdn.net/community/home-api/v1/get-business-list?page=1&size=20&businessType=lately&noMore=false&username=Yxh666
- https://blog.csdn.net/community/home-api/v1/get-business-list?page=2&size=20&businessType=lately&noMore=false&username=Yxh666
- https://blog.csdn.net/community/home-api/v1/get-business-list?page=3&size=20&businessType=lately&noMore=false&username=Yxh666
Notice that only the page parameter changes in these URLs, so we can grab every page of content simply by swapping in a different page number.
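Before writing the full spider, it helps to confirm the API really returns JSON we can parse. Below is a minimal sketch that requests page 1 and prints the titles; the data -> list -> title layout mirrors what the crawler in section 3 parses, and the trimmed user-agent header is just an illustrative assumption (CSDN may require extra headers or cookies, in which case reuse the full headers from the crawler).

# Quick sanity check: fetch page 1 of the API and print each post title.
# The response layout (data -> list -> title) is taken from the crawler below.
import requests

api = ('https://blog.csdn.net/community/home-api/v1/get-business-list'
       '?page=1&size=20&businessType=lately&noMore=false&username=Yxh666')
headers = {'user-agent': 'Mozilla/5.0'}  # minimal header; swap in a full UA if CSDN rejects it

resp = requests.get(api, headers=headers).json()
for post in resp['data']['list']:
    print(post['title'])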
2. Technical Points
- Libraries used: requests, jieba, wordcloud, matplotlib, numpy, PIL
- Learning focus: how to use the jieba and wordcloud modules, and how to fetch data from a dynamically loaded page (a short jieba sketch follows below)
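If you have never used jieba, the only thing this project needs from it is jieba.lcut, which splits a Chinese string into a list of words. A tiny illustrative sketch (the sample sentence is made up):

# Minimal jieba demo: lcut segments a Chinese sentence into a list of words.
import jieba

sample = "我用Python爬取了CSDN博客的标题和摘要"
words = jieba.lcut(sample)
print(words)  # e.g. ['我', '用', 'Python', '爬取', '了', 'CSDN', '博客', '的', '标题', '和', '摘要']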
3. Writing the Code
- Crawler code:
import requests
import time
import random


class CsdnSpider:
    def __init__(self):
        self.url = 'https://blog.csdn.net/community/home-api/v1/get-business-list?page={}&size=20&businessType=blog&orderby=&noMore=false&username=Yxh666'
        # Write titles and summaries to a txt file
        self.f = open('blog.txt', 'w', encoding='utf8')

    def get_html(self, url):
        headers = {
            'user-agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.106 Safari/537.36',
        }
        _html = requests.get(url=url, headers=headers).json()
        self.parse_html(_html)

    def parse_html(self, _html):
        result_data = _html['data']['list']
        for res in result_data:
            item = {}
            item['title'] = res['title']
            # Strip tabs, non-breaking spaces and other stray whitespace from the summary
            item['description'] = res['description'].replace('\t', '').replace(' ', '').replace('\xa0', '').replace(
                '\u2003', '').replace('...', '')
            item['diggCount'] = res['diggCount']
            item['commentCount'] = res['commentCount']
            item['viewCount'] = res['viewCount']
            item['url'] = res['url']
            print(item)
            self.f.write(item['title'])
            self.f.write(item['description'])
            self.f.write('\n')

    def run(self):
        for i in range(1, 6):
            page_url = self.url.format(i)
            self.get_html(url=page_url)
            # Random pause between pages to avoid hammering the server
            time.sleep(random.randint(2, 4))
        self.f.close()


if __name__ == '__main__':
    spider = CsdnSpider()
    spider.run()
- Word cloud code:
pip install jieba
pip install wordcloud
import jieba
from matplotlib import pyplot as plt
import wordcloud as wc
from PIL import Image
import numpy as np

with open("blog.txt", "r", encoding="utf-8") as f:
    content = f.read()

# Segment the scraped text with jieba and join the words with spaces for WordCloud
res = jieba.lcut(content)
text = " ".join(res)

# Load an image to use as the mask so the cloud takes its shape
mask = np.array(Image.open("logo.jpeg"))

# SimHei.ttf is the font; a CJK font is required to render Chinese, download one if you do not have it
word_cloud = wc.WordCloud(font_path="SimHei.ttf", mask=mask)
word_cloud.generate(text)

# Save the word cloud image
word_cloud.to_file("blog.png")

plt.imshow(word_cloud)
plt.show()
print(text)
Good grief, what a mess I've made. Come take a look at what it turned out like.
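Since the whole point was supposed to be finding the most frequent words, here is an optional sketch that counts them explicitly with collections.Counter instead of only drawing a cloud. It assumes the blog.txt produced by the crawler above and simply drops single-character tokens; both choices are mine, not part of the original script.

# Optional: count word frequencies explicitly from the scraped blog.txt.
from collections import Counter

import jieba

with open('blog.txt', 'r', encoding='utf-8') as f:
    content = f.read()

# Keep only tokens of length >= 2 to skip punctuation and single characters
words = [w for w in jieba.lcut(content) if len(w) >= 2]

# Print the 20 most common words with their counts
for word, count in Counter(words).most_common(20):
    print(word, count)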