How to tweak the NLTK Python code in such a way that I train the classifier only once


【Title】:How to tweak the NLTK Python code in such a way that I train the classifier only once 【Posted】:2014-07-11 03:52:32 【Question】:

I am trying to run sentiment analysis on a fairly large dataset of about 10,000 sentences. At the moment, when I use the NLTK Python code below to train and test a Naive Bayes classifier, I retrain the classifier every time I need to classify a new set of sentences, and this takes a lot of time. Is there a way to reuse the output of the training step for classification, so that I save all that time? This is the NLTK code I use.

import nltk
import re
import csv
#Read the tweets one by one and process it



def processTweet(tweet):
    # process the tweets
    #convert to lower case
    tweet = tweet.lower()
    #Convert www.* or https?://* to URL
    tweet = re.sub('((www\.[^\s]+)|(https?://[^\s]+))','URL',tweet)
    #Convert @username to AT_USER
    tweet = re.sub('@[^\s]+','AT_USER',tweet)
    #Remove additional white spaces
    tweet = re.sub('[\s]+', ' ', tweet)
    #Replace #word with word
    tweet = re.sub(r'#([^\s]+)', r'\1', tweet)
    #trim
    tweet = tweet.strip('\'"')
    return tweet

def replaceTwoOrMore(s):
    #look for 2 or more repetitions of character and replace with the character itself
    pattern = re.compile(r"(.)\1{1,}", re.DOTALL)
    return pattern.sub(r"\1\1", s)
#end

#start getStopWordList
def getStopWordList(stopWordListFileName):
    #read the stopwords file and build a list
    stopWords = []
    stopWords.append('AT_USER')
    stopWords.append('url')
    stopWords.append('URL')
    stopWords.append('rt')

    fp = open(stopWordListFileName)
    line = fp.readline()
    while line:
        word = line.strip()
        stopWords.append(word)
        line = fp.readline()
    fp.close()
    return stopWords
#end

#start getfeatureVector
def getFeatureVector(tweet):
    featureVector = []
    #split tweet into words
    words = tweet.split()
    for w in words:
        #replace two or more with two occurrences
        w = replaceTwoOrMore(w)
        #strip punctuation
        w = w.strip('\'"?,.')
        #keep only words made of alphanumerics that start with a letter
        val = re.search(r"^[a-zA-Z][a-zA-Z0-9]*$", w)

        #ignore if it is a stop word
        if(w in stopWords or val is None):
            continue
        else:
            featureVector.append(w.lower())
    return featureVector
#end

def extract_features(tweet):
    tweet_words = set(tweet)
    features = {}
    for word in featureList:
        features['contains(%s)' % word] = (word in tweet_words)
    return features

inpTweets = csv.reader(open('sheet3.csv', 'rb'), delimiter=',')
stopWords = getStopWordList('stopwords.txt')
featureList = []



# Get tweet words
tweets = []
for row in inpTweets:
    sentiment = row[0]
    tweet = row[1]
    processedTweet = processTweet(tweet)
    featureVector = getFeatureVector(processedTweet)
    featureList.extend(featureVector)
    tweets.append((featureVector, sentiment))
#end loop

# Remove featureList duplicates
featureList = list(set(featureList))

# Extract feature vectors for all tweets in one shot
training_set = nltk.classify.util.apply_features(extract_features, tweets)

NBClassifier = nltk.NaiveBayesClassifier.train(training_set)

ft = open("april2.tsv")
line = ft.readline()

fo = open("dunno.tsv", "w")

while line:
    testTweet = line
    processedTestTweet = processTweet(testTweet)
    label = NBClassifier.classify(extract_features(getFeatureVector(processedTestTweet)))
    fo.write(label + "\n")
    line = ft.readline()

fo.close()
ft.close()

【Comments】:

Have you tried pickling the classifier object? This might help. Thank you so much, it helped!

【Answer 1】:

If you want to stick with NLTK, try pickle, e.g. https://spaghetti-tagger.googlecode.com/svn/spaghetti.py; see https://docs.python.org/2/library/pickle.html

#-*- coding: utf8 -*-

from nltk import UnigramTagger as ut
from nltk import BigramTagger as bt
from cPickle import dump,load

def loadtagger(taggerfilename):
    infile = open(taggerfilename,'rb')
    tagger = load(infile)
    infile.close()
    return tagger

def traintag(corpusname, corpus):
    # Function to save tagger.
    def savetagger(tagfilename,tagger):
        outfile = open(tagfilename, 'wb')
        dump(tagger, outfile, -1)
        outfile.close()
        return
    # Training UnigramTagger.
    uni_tag = ut(corpus)
    savetagger(corpusname+'_unigram.tagger',uni_tag)
    # Training BigramTagger.
    bi_tag = bt(corpus)
    savetagger(corpusname+'_bigram.tagger',bi_tag)
    print "Tagger trained with",corpusname,"using" +\
                "UnigramTagger and BigramTagger."
    return
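
Applied to the question's own classifier, a minimal sketch along the same lines (assuming Python 2 as above, and the NBClassifier and featureList names from the question; the pickle filename is just an example) would persist both the trained model and the feature list, since extract_features needs the same vocabulary at classification time:

from cPickle import dump, load

# Run once, right after training:
out = open('nb_classifier.pickle', 'wb')
dump((NBClassifier, featureList), out, -1)   # -1 = highest pickle protocol
out.close()

# On every later run, skip training and load the saved objects instead:
inp = open('nb_classifier.pickle', 'rb')
NBClassifier, featureList = load(inp)
inp.close()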

Otherwise, try other machine learning libraries such as sklearn or shogun.

【Discussion】:

【Answer 2】:

The Naive Bayes classifier module in NLTK is remarkably slow because it is a pure Python implementation. For that reason, consider using a different machine learning (ML) library such as scikit-learn.

YS-L's tip about using cPickle is good for your purposes at the moment; however, if you ever have to retrain the classifier, it would be better to switch to a different Naive Bayes implementation.
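
As a rough sketch of what that could look like (the pipeline below is an illustration, not the asker's code; sheet3.csv and its label,text column layout are taken from the question):

import csv
import cPickle
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Load the labelled tweets (same CSV layout as in the question).
labels, texts = [], []
for row in csv.reader(open('sheet3.csv', 'rb')):
    labels.append(row[0])
    texts.append(row[1])

# Bag-of-words features feeding MultinomialNB, a NumPy-vectorized
# Naive Bayes that trains far faster than NLTK's pure-Python one.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# Persist once with cPickle, just like the NLTK classifier above.
out = open('sklearn_nb.pickle', 'wb')
cPickle.dump(model, out, -1)
out.close()

print model.predict(['some new tweet to classify'])[0]

Even without pickling, retraining a pipeline like this is typically much faster than NLTK's pure-Python estimator.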

【Discussion】:
