Computing the Similarity of Two Articles Using the Cosine Theorem
Posted by xitingxie
Computing the similarity of two articles using the cosine theorem (methodology, detailed and easy to follow):
http://blog.csdn.net/dearwind153/article/details/52316151
Python implementation (code):
http://outofmemory.cn/code-snippet/35172/match-text-release
(jieba word segmentation download and installation: http://www.cnblogs.com/kaituorensheng/p/3595879.html)
Java implementation (code + method description):
https://my.oschina.net/leejun2005/blog/116291
(The above are the references I consulted.)
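To summarize the idea from the references above: each article is turned into a word-frequency vector, and the similarity is the cosine of the angle between the two vectors, cos θ = A·B / (|A|·|B|). A minimal sketch of that computation (the function name and the toy dicts below are my own, not from the linked posts):

```python
from math import sqrt

def cosine_similarity(freq_a, freq_b):
    """Cosine similarity between two word-frequency dicts."""
    # Dot product over the words of the first text (missing words count as 0)
    dot = sum(freq_a[w] * freq_b.get(w, 0) for w in freq_a)
    # Euclidean lengths of the two frequency vectors
    norm_a = sqrt(sum(c * c for c in freq_a.values()))
    norm_b = sqrt(sum(c * c for c in freq_b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0  # an empty text has no direction; define similarity as 0
    return dot / (norm_a * norm_b)

# Identical vectors give 1.0; disjoint vocabularies give 0.0
print(cosine_similarity({"a": 2, "b": 1}, {"a": 2, "b": 1}))
print(cosine_similarity({"a": 1}, {"b": 1}))
```

The value ranges from 0 (no shared words) to 1 (identical frequency profiles), which is exactly the `rate` printed by the script below.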
-----------------------------------------------------------------------------------------------------------------------------------------------
I used the Python implementation; you need to install the jieba Python package first.
The code is as follows:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import re
from math import sqrt

# You have to install this package first: jieba (Chinese word segmentation)
import jieba


def file_reader(filename, filename2):
    # word -> [count in file 1, count in file 2]
    file_words = {}
    # Common Chinese stop words to skip
    ignore_list = [u'的', u'了', u'和', u'呢', u'啊', u'哦', u'恩', u'嗯', u'吧']
    # Keep only tokens made entirely of Chinese characters
    accepted_chars = re.compile(u'[\u4E00-\u9FA5]+')

    with open(filename, encoding='utf-8') as file_object:
        all_the_text = file_object.read()
    seg_list = jieba.cut(all_the_text, cut_all=True)  # full-mode segmentation
    for s in seg_list:
        if accepted_chars.match(s) and s not in ignore_list:
            if s not in file_words:
                file_words[s] = [1, 0]
            else:
                file_words[s][0] += 1

    with open(filename2, encoding='utf-8') as file_object2:
        all_the_text = file_object2.read()
    seg_list = jieba.cut(all_the_text, cut_all=True)
    for s in seg_list:
        if accepted_chars.match(s) and s not in ignore_list:
            if s not in file_words:
                file_words[s] = [0, 1]
            else:
                file_words[s][1] += 1

    # Cosine of the angle between the two word-frequency vectors
    sum_2 = 0
    sum_file1 = 0
    sum_file2 = 0
    for word in file_words.values():
        sum_2 += word[0] * word[1]
        sum_file1 += word[0] ** 2
        sum_file2 += word[1] ** 2
    rate = sum_2 / sqrt(sum_file1 * sum_file2)
    print('rate:')
    print(rate)


file_reader('thefile.txt', 'thefile2.txt')
# This snippet originally came from http://outofmemory.cn
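For a quick sanity check of the `rate` arithmetic without needing files or jieba, the same computation can be run on a hand-built `file_words` dict (the words and counts here are made up for illustration):

```python
from math import sqrt

# Hypothetical counts: word -> [count in file 1, count in file 2]
file_words = {
    u'天气': [2, 1],
    u'很好': [1, 1],
    u'下雨': [0, 2],
}

# Same three sums as in the script above
sum_2 = sum(c1 * c2 for c1, c2 in file_words.values())        # dot product
sum_file1 = sum(c1 ** 2 for c1, _ in file_words.values())     # |A|^2
sum_file2 = sum(c2 ** 2 for _, c2 in file_words.values())     # |B|^2
rate = sum_2 / sqrt(sum_file1 * sum_file2)
print(round(rate, 4))  # 3 / sqrt(5 * 6) ≈ 0.5477
```

The two texts share some vocabulary but not all, so the rate falls strictly between 0 and 1.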