Topic distribution: how do we see which document belongs to which topic after doing LDA in Python?

[Posted]: 2014-01-25 21:58:17

[Problem description]:

I was able to run the LDA code from gensim and get the top 10 topics with their respective keywords.

Now I want to go a step further and check how accurate the LDA algorithm is by seeing which documents it clusters into each topic. Is this possible in gensim LDA?

Basically I would like to do something like this, but in Python and using gensim:

LDA with topicmodels, how can I see which topics different documents belong to?

[Comments]:

gensim is a cool and simple library, and its developer Radim is someone who knows his library very well. Are you looking for something that clusters documents by topic?

[Answer 1]:

Using the topic probabilities you can try to set some threshold and use it as a clustering baseline, but I am sure there are better ways to do clustering than this "hacky" method.

from gensim import corpora, models, similarities
from itertools import chain

""" DEMO """
documents = ["Human machine interface for lab abc computer applications",
             "A survey of user opinion of computer system response time",
             "The EPS user interface management system",
             "System and human system engineering testing of EPS",
             "Relation of user perceived response time to error measurement",
             "The generation of random binary unordered trees",
             "The intersection graph of paths in trees",
             "Graph minors IV Widths of trees and well quasi ordering",
             "Graph minors A survey"]

# remove common words and tokenize
stoplist = set('for a of the and to in'.split())
texts = [[word for word in document.lower().split() if word not in stoplist]
         for document in documents]

# remove words that appear only once
all_tokens = sum(texts, [])
tokens_once = set(word for word in set(all_tokens) if all_tokens.count(word) == 1)
texts = [[word for word in text if word not in tokens_once] for text in texts]

# Create the dictionary.
id2word = corpora.Dictionary(texts)
# Create the bag-of-words corpus.
mm = [id2word.doc2bow(text) for text in texts]

# Train the LDA model.
lda = models.ldamodel.LdaModel(corpus=mm, id2word=id2word, num_topics=3,
                               update_every=1, chunksize=10000, passes=1)

# Prints the topics.
for top in lda.print_topics():
    print(top)
print()

# Assigns the topics to the documents in corpus
lda_corpus = lda[mm]

# Find the threshold; let's set the threshold to be 1/#topics.
# To show that the threshold is sane, we average all the probabilities:
scores = list(chain(*[[score for topic_id, score in doc]
                      for doc in lda_corpus]))
threshold = sum(scores) / len(scores)
print(threshold)
print()

cluster1 = [j for i,j in zip(lda_corpus,documents) if i[0][1] > threshold]
cluster2 = [j for i,j in zip(lda_corpus,documents) if i[1][1] > threshold]
cluster3 = [j for i,j in zip(lda_corpus,documents) if i[2][1] > threshold]

print(cluster1)
print(cluster2)
print(cluster3)

[out]:

0.131*trees + 0.121*graph + 0.119*system + 0.115*user + 0.098*survey + 0.082*interface + 0.080*eps + 0.064*minors + 0.056*response + 0.056*computer
0.171*time + 0.171*user + 0.170*response + 0.082*survey + 0.080*computer + 0.079*system + 0.050*trees + 0.042*graph + 0.040*minors + 0.040*human
0.155*system + 0.150*human + 0.110*graph + 0.107*minors + 0.094*trees + 0.090*eps + 0.088*computer + 0.087*interface + 0.040*survey + 0.028*user

0.333333333333

['The EPS user interface management system', 'The generation of random binary unordered trees', 'The intersection graph of paths in trees', 'Graph minors A survey']
['A survey of user opinion of computer system response time', 'Relation of user perceived response time to error measurement']
['Human machine interface for lab abc computer applications', 'System and human system engineering testing of EPS', 'Graph minors IV Widths of trees and well quasi ordering']

Just to make it clearer:

# Find the threshold; let's set the threshold to be 1/#topics.
# To show that the threshold is sane, we average all the probabilities:
scores = []
for doc in lda_corpus:
    for topic_id, score in doc:
        scores.append(score)
threshold = sum(scores) / len(scores)

The code above sums the scores of all topics across all documents, then normalizes that sum by the number of scores. Since each document's full topic distribution sums to 1, the average comes out to roughly 1/#topics, which matches the 0.333333333333 in the output above.

[Discussion]:

This looks like a nice solution! Another solution I found is to use the topic distributions to do k-means clustering, as shown in this link ***.com/questions/6486738/…, but I am not sure how to implement it. Do you know how? (A hedged sketch of that idea follows these comments.)

I am also trying to re-implement Brown clustering (***.com/questions/20998832/…), but given the (topic, prob) tuples, you can try the script from ***.com/questions/20990538/…

Depending on how many topics you have, how would you use more clusters?

I got better performance by removing the unique words, as in this question.

Could you explain this line of code more specifically? scores = list(chain(*[[score for topic_id, score in doc] for doc in lda_corpus])); threshold = sum(scores)/len(scores)
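Not from the original answers, but regarding the k-means idea in the first comment above, a minimal sketch could look like this, assuming scikit-learn is available and reusing the lda, mm, and documents objects from the answer:

from gensim import matutils
from sklearn.cluster import KMeans

# Densify each document's sparse (topic_id, prob) list into a fixed-length
# vector, so every document becomes a point in topic space.
num_topics = 3
doc_topic = [matutils.sparse2full(doc, num_topics) for doc in lda[mm]]

# Cluster the documents in topic space; n_clusters need not equal num_topics.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(doc_topic)
for label, text in zip(kmeans.labels_, documents):
    print(label, text)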

[Answer 2]:

If you want to use the trick of

cluster1 = [j for i,j in zip(lda_corpus,documents) if i[0][1] > threshold]
cluster2 = [j for i,j in zip(lda_corpus,documents) if i[1][1] > threshold]
cluster3 = [j for i,j in zip(lda_corpus,documents) if i[2][1] > threshold]

from alvas's answer above, make sure to set minimum_probability=0 in the LdaModel:

lda = gensim.models.ldamodel.LdaModel(corpus,
                                      num_topics=num_topics, id2word=dictionary,
                                      passes=2, minimum_probability=0)

Otherwise, the dimensions of lda_corpus and documents may not agree, because gensim suppresses any topic entry whose probability is lower than minimum_probability.
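As a side note, and not part of the original answer: gensim's LdaModel also exposes get_document_topics, which lets you override the cutoff per query, so you can keep the model's default and still get the full distribution when you need it. A minimal sketch, reusing the lda model just built:

# Request every topic's probability for one document, overriding the
# model-level minimum_probability for this single call.
doc_topics = lda.get_document_topics(corpus[0], minimum_probability=0)
print(doc_topics)  # one (topic_id, probability) tuple per topic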

Another way to group documents into topics is to assign each document to the topic with the maximum probability:

lda_corpus = [max(prob, key=lambda y: y[1])
              for prob in lda[mm]]
playlists = [[] for i in range(num_topics)]
for i, x in enumerate(lda_corpus):
    playlists[x[0]].append(documents[i])

Note that lda[mm] is, roughly speaking, a list of lists, or a two-dimensional matrix. The number of rows is the number of documents and the number of columns is the number of topics. Each matrix element is a tuple of the form (3, 0.82), for example; here 3 refers to the topic index and 0.82 is the corresponding probability of that topic. By default, minimum_probability=0.01, and any tuple with probability less than 0.01 is omitted from lda[mm]. If you use the maximum-probability grouping method, you can set it to 1/#topics.
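To make that two-dimensional view concrete, here is a small sketch (not in the original answer) using gensim's matutils.corpus2dense helper to turn lda[mm] into a dense document-by-topic matrix; num_topics is assumed to match the trained model:

from gensim import matutils

# corpus2dense returns a (num_terms x num_docs) array, so transpose to get one
# row per document and one column per topic; suppressed entries become 0.0.
doc_topic = matutils.corpus2dense(lda[mm], num_terms=num_topics).T
print(doc_topic.shape)  # (number of documents, number of topics)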

[Discussion]:

Yes, using the maximum probability was also my later thought! Thanks for showing the implementation.

Hey @nos, could you explain what the first part of the code does, specifically the [0][1] > threshold part? What do these numbers represent?

@AndresAzqueta The elements of lda_corpus have the form [(0, p0), (1, p1), ...], where the first number is the topic index and the second number is the corresponding probability that the document belongs to that topic. If there are N topics, the list contains N tuples. However, if minimum_probability is not 0, tuples with probabilities lower than minimum_probability are not included in this list.

Hey @nos, thanks a lot for your answer. So if I have five topics, the series would be: [0][1] > threshold, [1][1] > threshold, [2][1] > threshold, [3][1] > threshold, [4][1] > threshold? Thanks

[Answer 3]:

lda_corpus[i] has the form [(0, t0), (1, t1), ..., (n, tn)], where the first entry of each tuple denotes the topic index and the second denotes the probability of that topic in that particular document i.
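For instance, printing the tuples directly makes this structure visible (a sketch reusing the lda and mm objects from the first answer):

# Each document maps to a sparse list of (topic_index, probability) tuples.
for i, doc in enumerate(lda[mm]):
    print("document", i, "->", doc)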

[Discussion]:
