Make dataframe of top N frequent terms for multiple corpora using tm package in R

Posted 2013-03-08 12:26:50

[Question]:

I've created several TermDocumentMatrix objects with the tm package in R.

I'd like to find the 10 most frequent terms in each set of documents, ending up with an output table like:

corpus1   corpus2
"beach"   "city"
"sand"    "sidewalk"
...        ...
[10th most frequent word]

By definition, findFreqTerms(corpus1, N) returns all terms that occur N or more times. To do this by hand I could keep adjusting N until roughly 10 terms come back, but the output of findFreqTerms is listed alphabetically, so unless I happen to pick exactly the right N I don't actually know which ones are the top 10. I suspect this involves digging around in the internals of the TDM, which you can see with str(corpus1); the question R tm package create matrix of N most frequent terms covers this, but the answer there is opaque enough to me that I'd like to rephrase the question.

Thanks!

[Question comments]:

[Answer 1]:

Here's one way to find the top N terms in a term-document matrix. In short, you convert the dtm to a matrix and then sort by the row sums:

# load text mining library    
library(tm)

# make corpus for text mining (data comes from package, for reproducibility) 
data("crude")
corpus <- Corpus(VectorSource(crude))

# process text (your methods may differ)
skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords)
a <- tm_map(corpus, FUN = tm_reduce, tmFuns = funcs)
a.dtm1 <- TermDocumentMatrix(a, control = list(wordLengths = c(3,10))) 

This is the approach from the question; it returns the words in alphabetical order, which, as you've noted, isn't always that useful...

N <- 10
findFreqTerms(a.dtm1, N)

[1] "barrel"     "barrels"    "bpd"        "crude"      "dlrs"       "government" "industry"   "kuwait"    
[9] "market"     "meeting"    "minister"   "mln"        "month"      "official"   "oil"        "opec"      
[17] "pct"        "price"      "prices"     "production" "reuter"     "saudi"      "sheikh"     "the"       
[25] "world"

Here's how you can get the counts of the top N words:

m <- as.matrix(a.dtm1)
v <- sort(rowSums(m), decreasing=TRUE)
head(v, N)

oil prices   opec    mln    the    bpd   dlrs  crude market reuter 
86     48     47     31     26     23     23     21     21     20 
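
If you just want the words themselves rather than the named counts, taking the names of that sorted vector should do it:

# character vector of the top N terms only
top_terms <- names(head(v, N))
top_terms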

For several document-term matrices, you can do something like this:

# make a list of the dtms
dtm_list <- list(a.dtm1, b.dtm1, c.dtm1, d.dtm1)
# apply the rowsums function to each item of the list
lapply(dtm_list, function(x)  sort(rowSums(as.matrix(x)), decreasing=TRUE))

Is that what you're trying to do?

Hat tip to Ian Fellows' wordcloud package, where I first saw this method.

UPDATE: following the comments below, here are some more details...

Here's some data to make a reproducible example with multiple corpora:

examp1 <- "When discussing performance with colleagues, teaching, sending a bug report or searching for guidance on mailing lists and here on SO, a reproducible example is often asked and always helpful. What are your tips for creating an excellent example? How do you paste data structures from r in a text format? What other information should you include? Are there other tricks in addition to using dput(), dump() or structure()? When should you include library() or require() statements? Which reserved words should one avoid, in addition to c, df, data, etc? How does one make a great r reproducible example?"

examp2 <- "Sometimes the problem really isn't reproducible with a smaller piece of data, no matter how hard you try, and doesn't happen with synthetic data (although it's useful to show how you produced synthetic data sets that did not reproduce the problem, because it rules out some hypotheses). Posting the data to the web somewhere and providing a URL may be necessary. If the data can't be released to the public at large but could be shared at all, then you may be able to offer to e-mail it to interested parties (although this will cut down the number of people who will bother to work on it). I haven't actually seen this done, because people who can't release their data are sensitive about releasing it any form, but it would seem plausible that in some cases one could still post data if it were sufficiently anonymized/scrambled/corrupted slightly in some way. If you can't do either of these then you probably need to hire a consultant to solve your problem" 

examp3 <- "You are most likely to get good help with your R problem if you provide a reproducible example. A reproducible example allows someone else to recreate your problem by just copying and pasting R code. There are four things you need to include to make your example reproducible: required packages, data, code, and a description of your R environment. Packages should be loaded at the top of the script, so it's easy to see which ones the example needs. The easiest way to include data in an email is to use dput() to generate the R code to recreate it. For example, to recreate the mtcars dataset in R, I'd perform the following steps: Run dput(mtcars) in R Copy the output In my reproducible script, type mtcars <- then paste. Spend a little bit of time ensuring that your code is easy for others to read: make sure you've used spaces and your variable names are concise, but informative, use comments to indicate where your problem lies, do your best to remove everything that is not related to the problem. The shorter your code is, the easier it is to understand. Include the output of sessionInfo() as a comment. This summarises your R environment and makes it easy to check if you're using an out-of-date package. You can check you have actually made a reproducible example by starting up a fresh R session and pasting your script in. Before putting all of your code in an email, consider putting it on http://gist.github.com/. It will give your code nice syntax highlighting, and you don't have to worry about anything getting mangled by the email system."

examp4 <- "Do your homework before posting: If it is clear that you have done basic background research, you are far more likely to get an informative response. See also Further Resources further down this page. Do help.search(keyword) and apropos(keyword) with different keywords (type this at the R prompt). Do RSiteSearch(keyword) with different keywords (at the R prompt) to search R functions, contributed packages and R-Help postings. See ?RSiteSearch for further options and to restrict searches. Read the online help for relevant functions (type ?functionname, e.g., ?prod, at the R prompt) If something seems to have changed in R, look in the latest NEWS file on CRAN for information about it. Search the R-faq and the R-windows-faq if it might be relevant (http://cran.r-project.org/faqs.html) Read at least the relevant section in An Introduction to R If the function is from a package accompanying a book, e.g., the MASS package, consult the book before posting. The R Wiki has a section on finding functions and documentation"

examp5 <- "Before asking a technical question by e-mail, or in a newsgroup, or on a website chat board, do the following:  Try to find an answer by searching the archives of the forum you plan to post to. Try to find an answer by searching the Web. Try to find an answer by reading the manual. Try to find an answer by reading a FAQ. Try to find an answer by inspection or experimentation. Try to find an answer by asking a skilled friend. If you're a programmer, try to find an answer by reading the source code. When you ask your question, display the fact that you have done these things first; this will help establish that you're not being a lazy sponge and wasting people's time. Better yet, display what you have learned from doing these things. We like answering questions for people who have demonstrated they can learn from the answers. Use tactics like doing a Google search on the text of whatever error message you get (searching Google groups as well as Web pages). This might well take you straight to fix documentation or a mailing list thread answering your question. Even if it doesn't, saying “I googled on the following phrase but didn't get anything that looked promising” is a good thing to do in e-mail or news postings requesting help, if only because it records what searches won't help. It will also help to direct other people with similar problems to your thread by linking the search terms to what will hopefully be your problem and resolution thread. Take your time. Do not expect to be able to solve a complicated problem with a few seconds of Googling. Read and understand the FAQs, sit back, relax and give the problem some thought before approaching experts. Trust us, they will be able to tell from your questions how much reading and thinking you did, and will be more willing to help if you come prepared. Don't instantly fire your whole arsenal of questions just because your first search turned up no answers (or too many). Prepare your question. Think it through. Hasty-sounding questions get hasty answers, or none at all. The more you do to demonstrate that having put thought and effort into solving your problem before seeking help, the more likely you are to actually get help. Beware of asking the wrong question. If you ask one that is based on faulty assumptions, J. Random Hacker is quite likely to reply with a uselessly literal answer while thinking Stupid question..., and hoping the experience of getting what you asked for rather than what you needed will teach you a lesson."

Now let's process the example text a little in the usual way. First, convert the character vectors into corpora.

library(tm)
list_examps <- lapply(1:5, function(i) eval(parse(text=paste0("examp",i))))
list_corpora <- lapply(1:length(list_examps), function(i) Corpus(VectorSource(list_examps[[i]])))
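
As an aside, if you'd rather not use eval(parse(...)) here, putting the vectors into a list directly gives the same result; this is just an alternative, and the rest of the steps work the same either way:

# equivalent, without eval(parse(...))
list_examps <- list(examp1, examp2, examp3, examp4, examp5)
# or, collecting them by name:
# list_examps <- mget(paste0("examp", 1:5))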

Now remove stopwords, numbers, punctuation and so on.

skipWords <- function(x) removeWords(x, stopwords("english"))
funcs <- list(tolower, removePunctuation, removeNumbers, stripWhitespace, skipWords)
list_corpora1 <- lapply(1:length(list_corpora), function(i) tm_map(list_corpora[[i]], FUN = tm_reduce, tmFuns = funcs))

Convert the processed corpora into term-document matrices:

list_dtms <- lapply(1:length(list_corpora1), function(i) TermDocumentMatrix(list_corpora1[[i]], control = list(wordLengths = c(3,10))))

Get the most frequent words in each corpus:

top_words <- lapply(1:length(list_dtms), function(x)  sort(rowSums(as.matrix(list_dtms[[x]])), decreasing=TRUE))

And reshape into a dataframe of the form you specified:

library(plyr)
top_words_df <- t(ldply(1:length(top_words), function(i)  head(names(top_words[[i]]),10)))
colnames(top_words_df) <- lapply(1:length(list_dtms), function(i) paste0("corpus",i))
top_words_df

    corpus1    corpus2      corpus3    corpus4     corpus5    
V1  "example"  "data"       "code"     "functions" "answer"   
V2  "addition" "people"     "example"  "prompt"    "help"     
V3  "data"     "synthetic"  "easy"     "relevant"  "try"      
V4  "how"      "able"       "email"    "book"      "question" 
V5  "include"  "actually"   "include"  "keywords"  "questions"
V6  "what"     "bother"     "recreate" "package"   "reading"  
V7  "when"     "consultant" "script"   "posting"   "answers"  
V8  "are"      "cut"        "check"    "read"      "people"   
V9  "avoid"    "form"       "data"     "search"    "search"   
V10 "bug"      "happen"     "mtcars"   "section"   "searching"

Can you adapt that to work with your data? If not, please edit your question to show more exactly what your data look like.

[Comments]:

Thanks! This is good, except that the last step doesn't quite get me there. The end goal is a data frame with the top N words from each of the dtms: say, a long df with a column for the document_id, a column for the term and a column for the frequency. If I do data.frame(unlist(lapply...))[1:N], I get a data frame with the top N terms from the first dtm in the list, but the terms end up as row names and the frequencies as the contents. I haven't worked with lists much, so I'm not sure how to move forward.

Yes, lists can take a little getting used to, but once you're comfortable with them you can do all sorts of handy things with lapply and the plyr functions. I've edited my answer to show how you can go from multiple corpora to the data frame you want. The key is to get your corpora into a list. Without knowing your specific data I can't be sure it will work for you. Give it a try and let me know.

Thanks for the thorough reply, this is exactly what I needed.
