NLTK Python word_tokenize [duplicate]

Posted: 2018-09-03 16:13:48

Question:

I have loaded a txt file containing 6000 lines of sentences. I tried to split("\n") the text and word_tokenize the sentences, but I get the following error:

Traceback (most recent call last):
  File "final.py", line 52, in <module>
    short_pos_words = word_tokenize(short_pos)
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/__init__.py", line 128, in word_tokenize
    sentences = [text] if preserve_line else sent_tokenize(text, language)
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/__init__.py", line 95, in sent_tokenize
    return tokenizer.tokenize(text)
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1237, in tokenize
    return list(self.sentences_from_text(text, realign_boundaries))
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1285, in sentences_from_text
    return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1276, in span_tokenize
    return [(sl.start, sl.stop) for sl in slices]
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1316, in _realign_boundaries
    for sl1, sl2 in _pair_iter(slices):
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 313, in _pair_iter
    for el in it:
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1291, in _slices_from_text
    if self.text_contains_sentbreak(context):
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1337, in text_contains_sentbreak
    for t in self._annotate_tokens(self._tokenize_words(text)):
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 1472, in _annotate_second_pass
    for t1, t2 in _pair_iter(tokens):
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 312, in _pair_iter
    prev = next(it)
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 581, in _annotate_first_pass
    for aug_tok in tokens:
  File "/home/tuanct1997/anaconda2/lib/python2.7/site-packages/nltk/tokenize/punkt.py", line 546, in _tokenize_words
    for line in plaintext.split('\n'):
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 6: ordinal not in range(128)
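
For reference, a minimal Python 2 sketch of how this kind of error can arise (the file name here is an assumption; the 0xc3 byte in the message is the first byte of a UTF-8 multi-byte character). The Punkt tokenizer used by word_tokenize works with unicode strings internally, so passing it a plain byte string containing non-ASCII bytes forces an implicit ASCII decode, which fails:

from nltk.tokenize import word_tokenize

# Python 2: open().read() returns a byte string (str), not unicode
short_pos = open('reviews.txt').read()        # bytes such as '\xc3\xa9' for 'e-acute' stay undecoded
short_pos_words = word_tokenize(short_pos)    # implicit ASCII decode inside NLTK -> UnicodeDecodeError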


Answer 1:

The problem is with the encoding of the file's contents. Assuming you want the str decoded as UTF-8 unicode:

Option 1 (Python 2 only; removed in Python 3):

import sys
reload(sys)                      # restore setdefaultencoding, which site.py deletes at startup
sys.setdefaultencoding('utf8')   # make implicit str -> unicode conversions use UTF-8
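
As the comments below note, this option is generally discouraged: changing the process-wide default encoding can hide encoding bugs elsewhere, and sys.setdefaultencoding no longer exists in Python 3.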

Option 2: pass the encoding argument to the open function when opening the text file:

f = open('/path/to/txt/file', 'r+', encoding="utf-8")
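
Note that the traceback above shows Python 2.7, where the built-in open() does not accept an encoding argument. A rough Python 2 equivalent of Option 2 uses io.open (also suggested in the comments below):

import io

# io.open accepts encoding= on Python 2 and returns unicode text
with io.open('/path/to/txt/file', 'r', encoding='utf-8') as f:
    text = f.read()    # unicode string, safe to pass to word_tokenize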

Comments:

I would remove Option 1. It's bad advice. In Python 2, use import io; io.open().

Actually I have already found a solution: I used decode('utf-8'), similar to Option 2. Thanks a lot.
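
A minimal sketch of that decode('utf-8') approach on Python 2, assuming the file is UTF-8 encoded (the file path and variable names are illustrative):

from nltk.tokenize import word_tokenize

# Python 2: read the raw bytes, then decode them explicitly before tokenizing
short_pos = open('/path/to/txt/file').read().decode('utf-8')
short_pos_words = word_tokenize(short_pos)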
