Preparing data for TfidfVectorizer use (scikit-learn)
Posted: 2017-05-26 04:14:11

Question: I am trying to use sklearn's TfidfVectorizer, and I am running into trouble because my input may not match what TfidfVectorizer expects. I have a bunch of JSON strings that I load and append to a list, and I now want that list to serve as the corpus for TfidfVectorizer.
Code:
import json
import pandas
from sklearn.feature_extraction.text import TfidfVectorizer

train = pandas.read_csv("train.tsv", sep='\t')

documents = []
for i, row in train.iterrows():
    data = json.loads(row['boilerplate'].lower())
    documents.append(data['body'])

vectorizer = TfidfVectorizer(min_df=1)
X = vectorizer.fit_transform(documents)
idf = vectorizer.idf_
print dict(zip(vectorizer.get_feature_names(), idf))
I get the following error:
Traceback (most recent call last):
  File "<ipython-input-56-94a6b95b0745>", line 1, in <module>
    runfile('C:/Users/Guinea Pig/Downloads/try.py', wdir='C:/Users/Guinea Pig/Downloads')
  File "D:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 585, in runfile
    execfile(filename, namespace)
  File "C:/Users/Guinea Pig/Downloads/try.py", line 19, in <module>
    X = vectorizer.fit_transform(documents)
  File "D:\Anaconda\lib\site-packages\sklearn\feature_extraction\text.py", line 1219, in fit_transform
    X = super(TfidfVectorizer, self).fit_transform(raw_documents)
  File "D:\Anaconda\lib\site-packages\sklearn\feature_extraction\text.py", line 780, in fit_transform
    vocabulary, X = self._count_vocab(raw_documents, self.fixed_vocabulary)
  File "D:\Anaconda\lib\site-packages\sklearn\feature_extraction\text.py", line 715, in _count_vocab
    for feature in analyze(doc):
  File "D:\Anaconda\lib\site-packages\sklearn\feature_extraction\text.py", line 229, in <lambda>
    tokenize(preprocess(self.decode(doc))), stop_words)
  File "D:\Anaconda\lib\site-packages\sklearn\feature_extraction\text.py", line 195, in <lambda>
    return lambda x: strip_accents(x.lower())
AttributeError: 'NoneType' object has no attribute 'lower'
I know the documents array is made up of Unicode objects rather than string objects, but I can't seem to get around this. Any ideas?
Answer 1: In the end I used:
str_docs = []
for item in documents:
    str_docs.append(item.encode('utf-8'))  # encode each Unicode document to a UTF-8 byte string
as an extra step before passing the list to the vectorizer.
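For what it's worth, the traceback ends in "'NoneType' object has no attribute 'lower'", which points to at least one data['body'] being None (or the key being absent) rather than a Unicode-versus-str mismatch. Below is a minimal sketch of a guard, under the assumption that some rows in the boilerplate JSON carry a null or missing body field:

documents = []
for i, row in train.iterrows():
    data = json.loads(row['boilerplate'].lower())
    body = data.get('body')   # returns None when the key is absent
    if body:                  # skip rows whose body is None or empty
        documents.append(body)

X = vectorizer.fit_transform(documents)

With the None entries filtered out before fit_transform, the analyzer never calls .lower() on a missing document; the UTF-8 encoding step above may then be unnecessary, since the vectorizer also accepts Unicode input.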