Sklearn - Return the top three classes from logistic regression
I'm trying to build a model that classifies customer emails by "case reason." I've cleaned out stop words and so on, and tested several different models; logistic regression is the most accurate. The problem is that it's only right about 70% of the time, largely because of class imbalance in the data (a handful of case reasons account for most of the emails).
Rather than predict a single outcome, I'd like to give agents the top three (or maybe five) choices.
Here's what I have so far:
# Vectorize the text
from sklearn.feature_extraction.text import TfidfVectorizer

tfidf = TfidfVectorizer(sublinear_tf=True, min_df=5, norm='l2', encoding='latin-1',
                        ngram_range=(1, 2), stop_words=internal_stop_words)
features = tfidf.fit_transform(df.Description).toarray()
labels = df.category_id
features.shape
After vectorizing everything, I ran it through the block below to test which of four models fits best. It showed logistic regression at 70%, the best of the four:
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import LinearSVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

models = [
    RandomForestClassifier(n_estimators=200, max_depth=3, random_state=0),
    LinearSVC(),
    MultinomialNB(),
    LogisticRegression(random_state=0),
]
CV = 5
entries = []
for model in models:
    model_name = model.__class__.__name__
    accuracies = cross_val_score(model, features, labels, scoring='accuracy', cv=CV)
    for fold_idx, accuracy in enumerate(accuracies):
        entries.append((model_name, fold_idx, accuracy))
# One row per (model, fold) pair
cv_df = pd.DataFrame(entries, columns=['model_name', 'fold_idx', 'accuracy'])
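For a quick side-by-side comparison, the per-fold scores can be averaged per model; with the cv_df built above this is a one-liner:

# Mean cross-validated accuracy per model; logistic regression came out around 0.70
cv_df.groupby('model_name').accuracy.mean()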
I then built the classifier that values get passed to:
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer

X_train, X_test, y_train, y_test = train_test_split(df['Description'], df['Reason'],
                                                    random_state=0)
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_train)
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
clf = LogisticRegression(solver='saga', multi_class='multinomial').fit(X_train_tfidf, y_train)
# clf was fit on tf-idf features, so new text must go through both transforms
print(clf.predict(tfidf_transformer.transform(count_vect.transform(["i dont know my password"]))))
['Reason #1']
In this case, that's not the correct reason. I can run the following to get a table showing the probability of each class:
# Test logistic regression probabilities
probs = clf.predict_proba(tfidf_transformer.transform(count_vect.transform(["I dont know my password"])))
classes = clf.classes_
# probs has shape (1, n_classes); take the single row and pair it with the labels
output = pd.DataFrame({'reason': classes, 'prob': probs[0]})
output.sort_values(by='prob', ascending=False)
Which returns:
index   reason      prob
7       Reason #7   0.6036937161535804
6       Reason #6   0.1576980112870697
3       Reason #3   0.13221805369421305
13      Reason #13  0.028848040868062686
8       Reason #8   0.02264491874676607
9       Reason #9   0.01725043255540921
0       Reason #0   0.01600640516713904
10      Reason #10  0.005444588928021622
4       Reason #4   0.0052240828713529894
5       Reason #5   0.0048409867159243045
2       Reason #2   0.0024794864823573935
1       Reason #1   0.0014065266971805264
11      Reason #11  0.001393613395266496
12      Reason #12  0.0008511364376563769
So I can sort by the most likely reason, and in this case #3 (which ranked third) was the correct answer.
How can I return the top N results for an input, and test the model's accuracy as whether the actual reason appears among those N results?
Answer
You can sort the probabilities in descending order and retrieve the top n. To calculate accuracy, you can define a custom check that counts a prediction as correct if y_true is among the top n. Something along these lines should work:
import numpy as np

# Transform the held-out text the same way as the training data
X_test_tfidf = tfidf_transformer.transform(count_vect.transform(X_test))
probs = clf.predict_proba(X_test_tfidf)
# Sort descending and keep only the column indices of the top-n classes
top_n = np.argsort(probs)[:, :-n-1:-1]
# Map column indices back to class labels before comparing with y_true
top_n_labels = clf.classes_[top_n]
# Count a prediction as correct if the true label is among its top n
true_preds = 0
for i, true_label in enumerate(y_test):
    if true_label in top_n_labels[i]:
        true_preds += 1
accuracy = true_preds / len(y_test)
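To hand the agent the actual top-n choices for a new email, the same argsort trick applies to a single row. A minimal sketch, assuming the count_vect, tfidf_transformer, and clf fitted above, with n = 3:

import numpy as np

n = 3
new_email = ["i dont know my password"]
X_new = tfidf_transformer.transform(count_vect.transform(new_email))
probs_new = clf.predict_proba(X_new)[0]
# Column indices of the n largest probabilities, most likely first
top_idx = np.argsort(probs_new)[::-1][:n]
for label, p in zip(clf.classes_[top_idx], probs_new[top_idx]):
    print(f"{label}: {p:.3f}")

On newer scikit-learn versions (0.24+), sklearn.metrics.top_k_accuracy_score(y_test, probs, k=n, labels=clf.classes_) should give the same accuracy number as the loop above without the manual bookkeeping.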