Classify intent of random utterance of chat bot from training data and give different graphical visualization using random forest?
【Posted】: 2020-01-14 00:36:30
【Question description】: I am creating an NLP model to detect the intent of the utterances provided in the Excel file I use for training. The file has 2 columns, as shown below:
Utterence                            Intent
hi can I have an Apple Watch        service
how much I will be paying monthly   service
you still around                    YOU_THERE
are you still there                 YOU_THERE
you there                           YOU_THERE
Speak to me if you are there.       YOU_THERE
you around                          YOU_THERE
There are about 3,000 utterances and many intents in the training file.
I trained my model using the scikit-learn module; my code is shown below.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
import pandas as pd
import numpy as np
import re

def preprocessing(userQuery):
    # keep only letters and digits, lower-case the text and re-join the tokens
    letters_only = re.sub("[^a-zA-Z\\d]", " ", userQuery)
    words = letters_only.lower().split()
    return " ".join(words)

#read utterance data from a xlsx file
train = pd.read_excel('training.xlsx')
query_features = train['Utterence']

#create tfidf
tfidf_vectorizer = TfidfVectorizer(ngram_range=(1, 1))
new_query = [preprocessing(query) for query in query_features]
features = tfidf_vectorizer.fit_transform(new_query).toarray()

#create random forest classification model
model = RandomForestClassifier()
model.fit(features, train['Intent'])

#intent prediction on user query
userQuery = "I want apple watch"
userQueryList = []
userQueryList.append(preprocessing(userQuery))
utfidf = tfidf_vectorizer.transform(userQueryList)
print(" prediction: ", model.predict(utfidf))
One of the problems for me here is, for example: when I run it for the utterance I want apple watch, it predicts the intent as YOU_THERE instead of service (compare the training snapshot above), as shown below:
C:\U\AppData\Local\Continuum\anaconda3\lib\site-packages\sklearn\ensemble\forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
prediction: ['YOU_THERE']
Please help me with how I should train my model, what changes I should make to fix these issues, and how to check accuracy. I would also like to see how graphical visualization and the ROC curve can be produced with random forest. I am not very proficient in NLP, so any help would be appreciated.
【Answer 1】: The bag-of-words approach you are using does not perform well on sequence data. For your problem, word order is material to the classification. I suggest you use an LSTM (it performs better on sequence data).
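As a rough illustration of that suggestion (not code from the answer), a minimal Keras LSTM classifier for the same two-column training file could look like the sketch below; the vocabulary size, sequence length, embedding size and epoch count are assumptions chosen for illustration:

    # Minimal LSTM intent-classifier sketch (assumes TensorFlow/Keras is installed;
    # hyper-parameters are illustrative, not tuned)
    import pandas as pd
    from sklearn.preprocessing import LabelEncoder
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Embedding, LSTM, Dense

    train = pd.read_excel('training.xlsx')                      # same 2-column file as in the question
    texts = train['Utterence'].astype(str).str.lower().tolist()
    labels = LabelEncoder().fit_transform(train['Intent'])

    tokenizer = Tokenizer(num_words=5000, oov_token='<OOV>')    # assumed vocabulary size
    tokenizer.fit_on_texts(texts)
    X = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=20)  # assumed max sequence length

    model = Sequential([
        Embedding(input_dim=5000, output_dim=64),               # learned word embeddings
        LSTM(64),                                               # sequence-aware encoder keeps word order
        Dense(len(set(labels)), activation='softmax')           # one output per intent
    ])
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X, labels, epochs=10, validation_split=0.2)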
【Comments】:
Can you help by showing how the same code would look with an LSTM, how I can plot the ROC curve and the features against the predicted values, or point to any source on Github?
github.com/ashokc/…, ignore the first few lines and focus from line 40 onwards.
With the problem statement and training file mentioned there, it is hard to relate that code to my problem statement. Could you show where in my code it can be used, and how to generate the ROC curve? Thanks in advance.
【Answer 2】: Let's address your first question:
How should I train my model and what changes should I make to fix these issues?
Below I use the word2vec approach, which, instead of converting the utterances to vectors with the TFIDF approach (losing the semantic information contained in a particular sentence), preserves that semantic information.
To learn more about word2vec, refer to this blog:
[1]https://www.analyticsvidhya.com/blog/2017/06/word-embeddings-count-word2veec/
Below is the code for predicting the intent using the word2vec approach (note - it is the same as your code except that instead of TfidfVectorizer I use word2vec to obtain the vectors; also, the code is split into different functions to give a good logical overview, and their names make their purpose obvious).
import pandas as pd
import numpy as np
from gensim.models import Word2Vec
from sklearn import preprocessing
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

def preprocess_lower(token):
    # utility for preprocessing
    return token.lower()

def load_data(file_name):
    # load a csv file in memory
    return pd.read_csv(file_name)

def process_training_data(training_data):
    # process the training data and split it between independent and dependent variables
    training_sentences = [list(map(preprocess_lower, sentence.split(" "))) for sentence in list(training_data.Utterence.values)]
    target_class = training_data.Intent.values
    label_encoded_Y = preprocessing.LabelEncoder().fit_transform(list(target_class))
    return target_class, training_sentences, label_encoded_Y

def process_user_query(training_data):
    # tokenize and lower-case the user query the same way as the training sentences
    training_sentences = [list(map(preprocess_lower, sentence.split(" "))) for sentence in training_data]
    return training_sentences

def train_word2vec_model(train_sentences_list):
    # training word2vec on sentences list (inputted by user)
    # note: gensim < 4.0 API; in gensim >= 4.0 the parameter is vector_size instead of size
    model = Word2Vec(train_sentences_list, size=100, window=4, min_count=1, workers=4)
    return model

def convert_training_data_vectors(model, train_sentences_list):
    # get the average word vector of each sentence
    training_sentences_vector = list()
    for sentence in train_sentences_list:
        # model.wv.vocab is the gensim < 4.0 vocabulary lookup (key_to_index in gensim >= 4.0)
        sentence_vector = [list(model.wv[token]) for token in sentence if token in model.wv.vocab]
        training_sentences_vector.append(list(np.mean(sentence_vector, axis=0)))
    return training_sentences_vector

def training_rf_prediction_model(training_data_vectors, label_encoded_Y):
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    # training model on user inputted data
    rf_model = RandomForestClassifier()
    # here use the split function and divide the data into training and testing
    x_train, x_test, y_train, y_test = train_test_split(training_data_vectors, label_encoded_Y,
                                                        train_size=0.8, test_size=0.2)
    rf_model.fit(x_train, y_train)
    y_pred = rf_model.predict(x_test)
    print(accuracy_score(y_test, y_pred))
    return rf_model

def training_svm_prediction_model(training_data_vectors, label_encoded_Y):
    svm_model = SVC(gamma='auto')
    svm_model.fit(training_data_vectors, label_encoded_Y)
    return svm_model

def process_data_flow(file_name):
    training_data = load_data(file_name)
    target_class, training_sentences, label_encoded_Y = process_training_data(training_data)
    word2vec_model = train_word2vec_model(train_sentences_list=training_sentences)
    training_data_vectors = convert_training_data_vectors(word2vec_model, train_sentences_list=training_sentences)
    prediction_model = training_rf_prediction_model(training_data_vectors, label_encoded_Y)
    # intent prediction on user query
    userQuery = ["i want apple watch"]
    user_query_vectors = convert_training_data_vectors(word2vec_model, process_user_query(userQuery))
    predicted_class = prediction_model.predict(user_query_vectors)[0]
    predicted_intent = target_class[list(label_encoded_Y).index(predicted_class)]
    return predicted_intent

print("Predicted class: ", process_data_flow("sample_intent_data.csv"))
The sample data file is in csv format; just format your data and paste it in the following format:
#sample_intent_data.csv
Utterence,Intent
hi can I have an Apple Watch,service
how much I will be paying monthly,service
you still around,YOU_THERE
are you still there,YOU_THERE
you there,YOU_THERE
Speak to me if you are there,YOU_THERE
you around,YOU_THERE
Also note that for this approach to work, your training data should contain a large number of training utterances for each intent.
To measure accuracy, you can use the following approach:
Split the data into training and testing sets (with the split ratio specified):
x_train, x_test, y_train, y_test = train_test_split(training_vectors, label_encoded_Y,
                                                    train_size=0.8,
                                                    test_size=0.2)
After training the model, use the predict function on x_test to get the predictions. Then simply compare the model's predictions on the test data with the actual labels from the dataset, and you can easily determine the accuracy.
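A small sketch of that check, assuming the x_test, y_test and fitted rf_model from training_rf_prediction_model above are returned or otherwise available in scope; classification_report is an addition here, not part of the original answer:

    # assumes x_test, y_test and rf_model from the code above are in scope
    from sklearn.metrics import accuracy_score, classification_report

    y_pred = rf_model.predict(x_test)
    print("accuracy:", accuracy_score(y_test, y_pred))
    # per-intent precision, recall and f1 (the metrics asked about in the comments)
    print(classification_report(y_test, y_pred))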
Edit: added the accuracy score calculation at prediction time.
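The ROC curve the question also asks about is not covered by the original answer; one possible way to plot one-vs-rest ROC curves for the multi-class random forest, again assuming the same x_test/y_test split and fitted rf_model, is sketched below:

    # not part of the original answer: one-vs-rest ROC curves for the multi-class random forest
    # assumes x_test, y_test and the fitted rf_model from the code above, and more than two intents
    import matplotlib.pyplot as plt
    from sklearn.metrics import roc_curve, auc
    from sklearn.preprocessing import label_binarize

    classes = rf_model.classes_                                 # column order of predict_proba
    y_test_bin = label_binarize(y_test, classes=classes)        # one indicator column per intent
    y_score = rf_model.predict_proba(x_test)                    # class probabilities

    for i, cls in enumerate(classes):
        fpr, tpr, _ = roc_curve(y_test_bin[:, i], y_score[:, i])
        plt.plot(fpr, tpr, label=f"class {cls} (AUC = {auc(fpr, tpr):.2f})")

    plt.plot([0, 1], [0, 1], linestyle="--")                    # chance line
    plt.xlabel("False positive rate")
    plt.ylabel("True positive rate")
    plt.legend()
    plt.show()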
【Comments】:
I see what you mean, but I tried your code for my requirement, namely: 1. Train and test my model on sample csv with Utterence and intent and get the accuracy first 2. Then run my model on different set of utterences csv to get the classifier classify their intent and reproduce in that csv file. I can compute precision, recall and f1 score with scikit's classification report. Is that possible with your code?
Refer to this link to integrate the precision and recall calculation into the code above using sklearn: scikit-learn.org/stable/auto_examples/model_selection/…
Finding precision and recall is not the problem for me; the problem is points 1 and 2 in my comment above, which I cannot do with your code... can you help me with this?
For the first part, you need to make changes to the training_rf_prediction_model function (the changes have been added to the code above).
For the second part, you need to run the prediction, map the output back against label_encoded_Y (for the prediction data only), and use a confusion matrix to derive precision and recall.
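A sketch of what that second part could look like, assuming the word2vec_model, prediction_model, target_class and label_encoded_Y built inside process_data_flow are returned or otherwise made available, and assuming a hypothetical new_utterances.csv with an Utterence column (the file names and the Predicted_Intent column are illustrative, not part of the original answer):

    # assumes word2vec_model, prediction_model, target_class and label_encoded_Y from the code above
    # new_utterances.csv is a hypothetical file with a single Utterence column
    import pandas as pd
    from sklearn.metrics import confusion_matrix, classification_report

    new_data = pd.read_csv("new_utterances.csv")
    new_sentences = process_user_query(list(new_data.Utterence.values))
    new_vectors = convert_training_data_vectors(word2vec_model, new_sentences)
    predicted_classes = prediction_model.predict(new_vectors)

    # map encoded class ids back to intent names and write them next to the utterances
    id_to_intent = dict(zip(label_encoded_Y, target_class))
    new_data["Predicted_Intent"] = [id_to_intent[c] for c in predicted_classes]
    new_data.to_csv("new_utterances_with_intent.csv", index=False)

    # if the new file also carries true (encoded) intents, a confusion matrix and report
    # give the per-intent precision and recall mentioned above:
    # print(confusion_matrix(y_true_encoded, predicted_classes))
    # print(classification_report(y_true_encoded, predicted_classes))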