A Basic Example of Text Mining and Sentiment Analysis

Posted by ATYUN订阅号


Compiled by: yxy

Produced by: ATYUN订阅号


Research shows that TripAdvisor is playing an increasingly important role in travelers' decision-making. However, making sense of a TripAdvisor rating together with the nuances of each of thousands of review texts is challenging. To understand more fully whether hotel guests' reviews are related to how the hotel performs over time, I scraped all English-language reviews of one hotel, the Hilton Hawaiian Village, from TripAdvisor (details of the web scraping and the Python code are at the end of this article).


Loading the libraries

library(dplyr)
library(readr)
library(lubridate)
library(ggplot2)
library(tidytext)
library(tidyverse)
library(stringr)
library(tidyr)
library(scales)
library(broom)
library(purrr)
library(widyr)
library(igraph)
library(ggraph)
library(SnowballC)
library(wordcloud)
library(reshape2)
theme_set(theme_minimal())


The data

df <- read_csv("Hilton_Hawaiian_Village_Waikiki_Beach_Resort-Honolulu_Oahu_Hawaii__en.csv")
df <- df[complete.cases(df), ]
df$review_date <- as.Date(df$review_date, format = "%d-%B-%y")
dim(df); min(df$review_date); max(df$review_date)



There are 13,701 reviews of the Hilton Hawaiian Village on TripAdvisor, with review dates ranging from 2002-03-21 to 2018-08-02.

df %>%
  count(Week = round_date(review_date, "week")) %>%
  ggplot(aes(Week, n)) +
  geom_line() +
  ggtitle('The Number of Reviews Per Week')

[Figure: The Number of Reviews Per Week]

The highest weekly volume of reviews came at the end of 2014, when the hotel received more than 70 reviews in a single week.


Text mining of the review text

df <- tibble::rowid_to_column(df, "ID")
df <- df %>%
  mutate(review_date = as.POSIXct(review_date, origin = "1970-01-01"),
         month = round_date(review_date, "month"))
review_words <- df %>%
  distinct(review_body, .keep_all = TRUE) %>%
  unnest_tokens(word, review_body, drop = FALSE) %>%
  distinct(ID, word, .keep_all = TRUE) %>%
  anti_join(stop_words, by = "word") %>%
  filter(str_detect(word, "[^\\d]")) %>%
  group_by(word) %>%
  mutate(word_total = n()) %>%
  ungroup()
word_counts <- review_words %>%
  count(word, sort = TRUE)
word_counts %>%
  head(25) %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(word, n)) +
  geom_col(fill = "lightblue") +
  scale_y_continuous(labels = comma_format()) +
  coord_flip() +
  labs(title = "Most common words in review text 2002 to date",
       subtitle = "Among 13,701 reviews; stop words removed",
       y = "# of uses")

[Figure: Most common words in review text 2002 to date]


We can certainly do a little better, for example by combining "stay" with "stayed" and "pool" with "pools". This is called stemming: the process of reducing inflected (and sometimes derived) words to their stem, base, or root form.
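As a quick, minimal sketch of what stemming does (the example words below are my own, chosen only for illustration), wordStem() from the SnowballC package loaded above maps inflected forms onto a common stem:

library(SnowballC)

# Illustrative only: stem a few hand-picked words with the Porter stemmer
wordStem(c("stay", "stayed", "stays", "pool", "pools"))
# "stayed" and "stays" collapse to the same stem as "stay", and "pools" to "pool"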

word_counts %>%
  head(25) %>%
  mutate(word = wordStem(word)) %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(word, n)) +
  geom_col(fill = "lightblue") +
  scale_y_continuous(labels = comma_format()) +
  coord_flip() +
  labs(title = "Most common words in review text 2002 to date",
       subtitle = "Among 13,701 reviews; stop words removed and stemmed",
       y = "# of uses")

[Figure: Most common words in review text 2002 to date, stop words removed and stemmed]


BIGRAM

We often want to understand the relationships between words in the reviews. Which sequences of words are common in the review text? Given some words, which words are most likely to follow them? Which words are most strongly associated with one another? Many interesting text analyses are built on these relationships. When we examine pairs of two consecutive words, they are called "bigrams".
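As a minimal sketch of how bigram tokenization works (the sentence below is made up purely for illustration), unnest_tokens() from tidytext with token = "ngrams" and n = 2 splits text into overlapping pairs of consecutive words:

library(tidytext)
library(dplyr)
library(tibble)

# Illustrative only: tokenize a made-up sentence into bigrams
toy <- tibble(text = "the rainbow tower has a great ocean view")
toy %>%
  unnest_tokens(bigram, text, token = "ngrams", n = 2)
# yields consecutive pairs such as "the rainbow", "rainbow tower", "tower has", ...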


So, what are the most common bigrams in this hotel's reviews?

review_bigrams <- df %>%
  unnest_tokens(bigram, review_body, token = "ngrams", n = 2)
bigrams_separated <- review_bigrams %>%
  separate(bigram, c("word1", "word2"), sep = " ")
bigrams_filtered <- bigrams_separated %>%
  filter(!word1 %in% stop_words$word) %>%
  filter(!word2 %in% stop_words$word)
bigram_counts <- bigrams_filtered %>%
  count(word1, word2, sort = TRUE)
bigrams_united <- bigrams_filtered %>%
  unite(bigram, word1, word2, sep = " ")
bigrams_united %>%
  count(bigram, sort = TRUE)


The most common bigram is "rainbow tower", followed by "hawaiian village".


We can visualize bigrams in a word network:

review_subject <- df %>%
  unnest_tokens(word, review_body) %>%
  anti_join(stop_words)
my_stopwords <- data_frame(word = c(as.character(1:10)))
review_subject <- review_subject %>%
  anti_join(my_stopwords)
title_word_pairs <- review_subject %>%
  pairwise_count(word, ID, sort = TRUE, upper = FALSE)
set.seed(1234)
title_word_pairs %>%
  filter(n >= 1000) %>%
  graph_from_data_frame() %>%
  ggraph(layout = "fr") +
  geom_edge_link(aes(edge_alpha = n, edge_width = n), edge_colour = "cyan4") +
  geom_node_point(size = 5) +
  geom_node_text(aes(label = name), repel = TRUE,
                 point.padding = unit(0.2, "lines")) +
  ggtitle('Word network in TripAdvisor reviews') +
  theme_void()

[Figure: Word network in TripAdvisor reviews]

The figure above visualizes common word pairs in the TripAdvisor reviews, showing words that occurred together at least 1,000 times and were not stop words.


The network graph shows strong connections between the top words ("hawaiian", "village", "ocean" and "view"). However, we do not see any clear clustering structure in the network.


TRIGRAM

Bigrams are sometimes not enough. What are the most common trigrams in the Hilton Hawaiian Village's TripAdvisor reviews?

review_trigrams <- df %>%
  unnest_tokens(trigram, review_body, token = "ngrams", n = 3)

trigrams_separated <- review_trigrams %>%
  separate(trigram, c("word1", "word2", "word3"), sep = " ")

trigrams_filtered <- trigrams_separated %>%
  filter(!word1 %in% stop_words$word) %>%
  filter(!word2 %in% stop_words$word) %>%
  filter(!word3 %in% stop_words$word)

trigram_counts <- trigrams_filtered %>%
  count(word1, word2, word3, sort = TRUE)

trigrams_united <- trigrams_filtered %>%
  unite(trigram, word1, word2, word3, sep = " ")

trigrams_united %>%
  count(trigram, sort = TRUE)


The most common trigram is "hilton hawaiian village", and so on down the list.


Key word trends in the reviews

Which words and topics have become more frequent, or less frequent, over time? These can give us a sense of the hotel's changing ecosystem, such as service, renovations, and problem resolution, and let us predict which topics will keep growing.


The question we want to answer is: which words have been increasing in frequency in the TripAdvisor reviews over time?
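The code below fits one logistic regression per word, modeling the share of each month's reviews that contain that word as a function of time; a positive slope on year means the word appears in a growing fraction of reviews. As a minimal sketch of what a single per-word fit looks like (the counts here are made up purely for illustration):

# Hypothetical aggregated counts for one word (made-up numbers):
# n = reviews containing the word, month_total = all reviews in that period
one_word <- data.frame(
  year        = c(2015, 2016, 2017),
  n           = c(20, 45, 90),
  month_total = c(1000, 1100, 1200)
)
fit <- glm(cbind(n, month_total - n) ~ year, data = one_word, family = "binomial")
coef(fit)["year"]  # a positive slope means the word's usage share is growing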

reviews_per_month <- df %>%
  group_by(month) %>%
  summarize(month_total = n())
word_month_counts <- review_words %>%
  filter(word_total >= 1000) %>%
  count(word, month) %>%
  complete(word, month, fill = list(n = 0)) %>%
  inner_join(reviews_per_month, by = "month") %>%
  mutate(percent = n / month_total) %>%
  mutate(year = year(month) + yday(month) / 365)
mod <- ~ glm(cbind(n, month_total - n) ~ year, ., family = "binomial")
slopes <- word_month_counts %>%
  nest(-word) %>%
  mutate(model = map(data, mod)) %>%
  unnest(map(model, tidy)) %>%
  filter(term == "year") %>%
  arrange(desc(estimate))
slopes %>%
  head(9) %>%
  inner_join(word_month_counts, by = "word") %>%
  mutate(word = reorder(word, -estimate)) %>%
  ggplot(aes(month, n / month_total, color = word)) +
  geom_line(show.legend = FALSE) +
  scale_y_continuous(labels = percent_format()) +
  facet_wrap(~ word, scales = "free_y") +
  expand_limits(y = 0) +
  labs(x = "Year",
       y = "Percentage of reviews containing this word",
       title = "9 fastest growing words in TripAdvisor reviews",
       subtitle = "Judged by growth rate over 15 years")

[Figure: 9 fastest growing words in TripAdvisor reviews]

Prior to 2010, we can see peaks of discussion around "friday fireworks" and "lagoon". Words such as "resort fee" and "busy" grew fastest prior to 2005.


Which words have been decreasing in frequency in the reviews?

word_month_counts %>%
  filter(word %in% c("service", "food")) %>%
  ggplot(aes(month, n / month_total, color = word)) +
  geom_line(size = 1, alpha = .8) +
  scale_y_continuous(labels = percent_format()) +
  expand_limits(y = 0) +
  labs(x = "Year",
       y = "Percentage of reviews containing this term",
       title = "service vs food in terms of reviewers interest")

[Figure: service vs food in terms of reviewers interest]

Both service and food were major topics before 2010. Discussion of service and food peaked around 2003, at the start of the data, and has been declining since 2005, with occasional spikes.


Sentiment analysis

Sentiment analysis is widely applied to customer feedback such as reviews and survey results, as well as to online and social media content. It is suitable for applications ranging from marketing to customer service to clinical medicine.


In our case, the aim is to determine the reviewer's (i.e., the hotel guest's) opinion of his or her past experience at the hotel. That opinion may be a judgment or an evaluation.
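The approach below tokenizes each review into single words and joins the tokens against the Bing sentiment lexicon, in which every word is labeled simply "positive" or "negative". As a quick sketch of what such a lexicon lookup returns (the queried words are illustrative choices of my own):

library(tidytext)
library(dplyr)

# Illustrative only: check how a few words are labeled in the Bing lexicon
get_sentiments("bing") %>%
  filter(word %in% c("beautiful", "friendly", "dirty", "broken"))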


The most common positive and negative words in the reviews:

reviews <- df %>%
  filter(!is.na(review_body)) %>%
  select(ID, review_body) %>%
  group_by(row_number()) %>%
  ungroup()
tidy_reviews <- reviews %>%
  unnest_tokens(word, review_body)
tidy_reviews <- tidy_reviews %>%
  anti_join(stop_words)

bing_word_counts <- tidy_reviews %>%
  inner_join(get_sentiments("bing")) %>%
  count(word, sentiment, sort = TRUE) %>%
  ungroup()

bing_word_counts %>%
  group_by(sentiment) %>%
  top_n(10) %>%
  ungroup() %>%
  mutate(word = reorder(word, n)) %>%
  ggplot(aes(word, n, fill = sentiment)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~sentiment, scales = "free") +
  labs(y = "Contribution to sentiment", x = NULL) +
  coord_flip() +
  ggtitle('Words that contribute to positive and negative sentiment in the reviews')

[Figure: Words that contribute to positive and negative sentiment in the reviews]

Let's try another sentiment lexicon and see whether the results are the same.

contributions <- tidy_reviews %>%
  inner_join(get_sentiments("afinn"), by = "word") %>%
  group_by(word) %>%
  summarize(occurences = n(),
            contribution = sum(score))
contributions %>%
  top_n(25, abs(contribution)) %>%
  mutate(word = reorder(word, contribution)) %>%
  ggplot(aes(word, contribution, fill = contribution > 0)) +
  ggtitle('Words with the greatest contributions to positive/negative sentiment in reviews') +
  geom_col(show.legend = FALSE) +
  coord_flip()

[Figure: Words with the greatest contributions to positive/negative sentiment in reviews]

Interestingly, "diamond" (as in Diamond Head) is classified as positive sentiment.


There is a potential problem here. A word like "clean", in a different context, such as when preceded by "not", carries negative sentiment. In fact, most unigram-based analyses run into this negation problem, so we need to take the next step.
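To see concretely why unigrams lose this context, here is a minimal sketch with a made-up sentence:

library(tidytext)
library(dplyr)
library(tibble)

toy <- tibble(text = "the room was not clean")

# As unigrams, "not" and "clean" become separate tokens, so "clean" on its own
# still looks positive to a sentiment lexicon
toy %>% unnest_tokens(word, text)

# As bigrams, the pair "not clean" is kept together, preserving the negation
toy %>% unnest_tokens(bigram, text, token = "ngrams", n = 2)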


Using bigrams to provide context in sentiment analysis

We want to know how often words are preceded by a word like "not".

bigrams_separated %>%
  filter(word1 == "not") %>%
  count(word1, word2, sort = TRUE)



There are 850 occurrences in the data where the word "a" is preceded by "not", and 698 occurrences where the word "the" is preceded by "not". But this information is not very meaningful.

AFINN <- get_sentiments("afinn")
not_words <- bigrams_separated %>%
  filter(word1 == "not") %>%
  inner_join(AFINN, by = c(word2 = "word")) %>%
  count(word2, score, sort = TRUE) %>%
  ungroup()

not_words



This tells us that, in the data, the most common sentiment-associated word to follow "not" was "worth", and the second most common was "recommend", which would normally have a (positive) score of 2.
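The AFINN lexicon assigns each word an integer score between -5 and 5 rather than a simple positive/negative label. A quick way to inspect the scores of the words mentioned above is to look them up directly (a small sketch; note that newer releases of tidytext name this column value instead of score):

library(tidytext)
library(dplyr)

# Illustrative only: inspect the AFINN scores of two words discussed above
get_sentiments("afinn") %>%
  filter(word %in% c("worth", "recommend"))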


So, which words in our data made the largest "contribution" in the wrong direction?

not_words %>%
  mutate(contribution = n * score) %>%
  arrange(desc(abs(contribution))) %>%
  head(20) %>%
  mutate(word2 = reorder(word2, contribution)) %>%
  ggplot(aes(word2, n * score, fill = n * score > 0)) +
  geom_col(show.legend = FALSE) +
  xlab("Words preceded by \"not\"") +
  ylab("Sentiment score * number of occurrences") +
  ggtitle('The 20 words preceded by "not" that had the greatest contribution to sentiment scores, positive or negative direction') +
  coord_flip()

[Figure: The 20 words preceded by "not" that had the greatest contribution to sentiment scores]

The bigrams "not worth", "not great", "not good", "not recommend" and "not like" were the largest causes of misidentification, making the text appear much more positive than it actually is.


Besides "not", there are other words that negate the subsequent term, such as "no", "never" and "without".

negation_words <- c("not", "no", "never", "without")

negated_words <- bigrams_separated %>%
  filter(word1 %in% negation_words) %>%
  inner_join(AFINN, by = c(word2 = "word")) %>%
  count(word1, word2, score, sort = TRUE) %>%
  ungroup()

negated_words %>%
  mutate(contribution = n * score,
         word2 = reorder(paste(word2, word1, sep = "__"), contribution)) %>%
  group_by(word1) %>%
  top_n(12, abs(contribution)) %>%
  ggplot(aes(word2, contribution, fill = n * score > 0)) +
  geom_col(show.legend = FALSE) +
  facet_wrap(~ word1, scales = "free") +
  scale_x_discrete(labels = function(x) gsub("__.+$", "", x)) +
  xlab("Words preceded by negation term") +
  ylab("Sentiment score * # of occurrences") +
  ggtitle('The most common positive or negative words to follow negations such as "no", "not", "never" and "without"') +
  coord_flip()

[Figure: The most common positive or negative words to follow negations such as "no", "not", "never" and "without"]

It looks like the largest sources of misidentifying a word as positive come from "not worth/great/good/recommend", while the largest sources of incorrectly classified negative sentiment are "not bad" and "no problem".


Finally, let's find the most positive and the most negative reviews.

sentiment_messages <- tidy_reviews %>%
  inner_join(get_sentiments("afinn"), by = "word") %>%
  group_by(ID) %>%
  summarize(sentiment = mean(score),
            words = n()) %>%
  ungroup() %>%
  filter(words >= 5)
sentiment_messages %>%
  arrange(desc(sentiment))


The most positive review has ID 2363:

df[which(df$ID == 2363), ]$review_body[1]


sentiment_messages %>%
  arrange(sentiment)


The most negative review has ID 3748:

df[which(df$ID == 3748), ]$review_body[1]


GitHub: https://github.com/susanli2016/Data-Analysis-with-R/blob/master/Text%20Mining%20Hilton%20Hawaiian%20Village%20TripAdvisor%20Reviews.Rmd

Python code for the web scraping: https://github.com/susanli2016/NLP-with-Python/blob/master/Web%20scraping%20Hilton%20Hawaiian%20Village%20TripAdvisor%20Reviews.py





