Classifying Multinomial Naive Bayes Classifier with Python Example
Posted: 2013-07-02 07:55:54

Question:

I am looking for a simple example of how to run a Multinomial Naive Bayes Classifier. I came across this example from ***:
Implementing Bag-of-Words Naive-Bayes classifier in NLTK
import numpy as np
from nltk.probability import FreqDist
from nltk.classify import SklearnClassifier
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
pipeline = Pipeline([('tfidf', TfidfTransformer()),
                     ('chi2', SelectKBest(chi2, k=1000)),
                     ('nb', MultinomialNB())])
classif = SklearnClassifier(pipeline)
from nltk.corpus import movie_reviews
pos = [FreqDist(movie_reviews.words(i)) for i in movie_reviews.fileids('pos')]
neg = [FreqDist(movie_reviews.words(i)) for i in movie_reviews.fileids('neg')]
add_label = lambda lst, lab: [(x, lab) for x in lst]
#Original code from thread:
#classif.train(add_label(pos[:100], 'pos') + add_label(neg[:100], 'neg'))
classif.train(add_label(pos, 'pos') + add_label(neg, 'neg'))#Made changes here
#Original code from thread:
#l_pos = np.array(classif.batch_classify(pos[100:]))
#l_neg = np.array(classif.batch_classify(neg[100:]))
l_pos = np.array(classif.batch_classify(pos))#Made changes here
l_neg = np.array(classif.batch_classify(neg))#Made changes here
print "Confusion matrix:\n%d\t%d\n%d\t%d" % (
(l_pos == 'pos').sum(), (l_pos == 'neg').sum(),
(l_neg == 'pos').sum(), (l_neg == 'neg').sum())
After running this example, I got the following warning:
C:\Python27\lib\site-packages\scikit_learn-0.13.1-py2.7-win32.egg\sklearn\feature_selection\univariate_selection.py:327: UserWarning: Duplicate scores. Result may depend on feature ordering.There are probably duplicate features, or you used a classification score for a regression task.
  warn("Duplicate scores. Result may depend on feature ordering."
Confusion matrix:
876 124
63 937
So, my questions are:

1. Can anyone tell me what this warning message means?
2. I made some changes to the original code, so why are the confusion-matrix numbers so much higher than the ones in the original thread?
3. How can I test the accuracy of this classifier?
Answer 1:

The original code trains on the first 100 positive and 100 negative examples and then classifies the remainder. You removed that boundary and used every example in both the training and the classification phase; in other words, the same documents appear in training and evaluation. To fix this, split the data set into two parts, a training set and a test set.
The confusion-matrix numbers are higher (or at least different) because you are training on different data.
The confusion matrix is a measure of accuracy and shows the number of false positives and so on. Read more here: http://en.wikipedia.org/wiki/Confusion_matrix
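Below is a minimal sketch of that fix. It reuses the pos, neg, add_label, classif, and np objects defined in the question's code above; the 100-document boundary mirrors the original thread, and the accuracy line is simply the correct counts from the confusion matrix divided by the number of test documents:

# Train on the first 100 positive and 100 negative reviews, test on the rest,
# as in the original thread.
train = add_label(pos[:100], 'pos') + add_label(neg[:100], 'neg')
test_pos, test_neg = pos[100:], neg[100:]

classif.train(train)

l_pos = np.array(classif.batch_classify(test_pos))
l_neg = np.array(classif.batch_classify(test_neg))

print "Confusion matrix:\n%d\t%d\n%d\t%d" % (
    (l_pos == 'pos').sum(), (l_pos == 'neg').sum(),
    (l_neg == 'pos').sum(), (l_neg == 'neg').sum())

# Accuracy = correctly classified test documents / all test documents.
correct = (l_pos == 'pos').sum() + (l_neg == 'neg').sum()
print "Accuracy: %.3f" % (float(correct) / (len(l_pos) + len(l_neg)))

Because the model is now scored on documents it has never seen, these numbers will generally be lower than the ones obtained when classifying the training data itself, which is exactly the difference asked about in question 2.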
Comments:
If it helped, please accept an answer.

Answer 2:

I used the original code, which only includes the first 100 entries in the training set, but I still get that warning. My output is:
In [6]: %run testclassifier.py
C:\Users\..\AppData\Local\Enthought\Canopy\User\lib\site-packages\sklearn\feature_selection\univariate_selection.py:319: UserWarning: Duplicate scores. Result may depend on feature ordering.There are probably duplicate features, or you used a classification score for a regression task.
  warn("Duplicate scores. Result may depend on feature ordering."
Confusion matrix:
427 473
132 768