Tokenizing and Vectorizing Text
Posted by 杨鑫newlfe
```java
/**
 * @author YangXin
 * @info This code shows how to encode every word in a piece of text and then
 *       produce a linear weighted sum of the word encodings, so that the text
 *       as a whole is encoded as a vector. The encoding is done with
 *       StaticWordValueEncoder; we also need a way to break the text apart
 *       into words. Mahout provides the encoder, and Lucene provides the analyzer.
 */
package unitFourteen;

import java.io.IOException;
import java.io.StringReader;

import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.analysis.tokenattributes.TermAttribute;
import org.apache.lucene.util.Version;
import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.SequentialAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.vectorizer.encoders.FeatureVectorEncoder;
import org.apache.mahout.vectorizer.encoders.StaticWordValueEncoder;

public class TokenizingAndVectorizingText {
  public static void main(String[] args) throws IOException {
    // Encoder that hashes each word into a fixed-size feature vector.
    FeatureVectorEncoder encoder = new StaticWordValueEncoder("text");
    // Lucene analyzer that breaks the raw text into word tokens.
    Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_31);
    StringReader in = new StringReader("text to magically vectorize");
    TokenStream ts = analyzer.tokenStream("body", in);
    TermAttribute termAtt = ts.addAttribute(TermAttribute.class);

    // 100-dimensional sparse vector that will hold the encoded text.
    Vector v1 = new RandomAccessSparseVector(100);
    while (ts.incrementToken()) {
      char[] termBuffer = termAtt.termBuffer();
      int termLen = termAtt.termLength();
      String w = new String(termBuffer, 0, termLen);
      // Add this word to the vector with weight 1.
      encoder.addToVector(w, 1, v1);
    }
    // Print a sequential-access copy of the resulting vector.
    System.out.printf("%s\n", new SequentialAccessSparseVector(v1));
  }
}
```
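The listing above gives every word the same weight (1). StaticWordValueEncoder can also apply per-word weights through a dictionary, which is useful when words should carry, for example, IDF-style weights. The sketch below is a minimal illustration of that, assuming the same Mahout encoder API as the listing; the class name WeightedWordEncoding and the dictionary values are made up for the example, not real IDF scores.

```java
package unitFourteen;

import java.util.HashMap;
import java.util.Map;

import org.apache.mahout.math.RandomAccessSparseVector;
import org.apache.mahout.math.Vector;
import org.apache.mahout.vectorizer.encoders.StaticWordValueEncoder;

public class WeightedWordEncoding {
  public static void main(String[] args) {
    StaticWordValueEncoder encoder = new StaticWordValueEncoder("text");

    // Hypothetical per-word weights; in practice these could come from IDF statistics.
    Map<String, Double> weights = new HashMap<String, Double>();
    weights.put("vectorize", 2.0);
    weights.put("text", 0.5);
    encoder.setDictionary(weights);
    // Words missing from the dictionary fall back to this weight.
    encoder.setMissingValueWeight(1.0);

    // Same kind of sparse target vector as in the listing above.
    Vector v = new RandomAccessSparseVector(100);
    for (String w : new String[] {"text", "to", "magically", "vectorize"}) {
      encoder.addToVector(w, v); // weight is looked up per word
    }
    System.out.println(v);
  }
}
```

Words found in the dictionary are added with their listed weight, everything else gets the missing-value weight, so the rest of the pipeline (analyzer, target vector, downstream model) stays unchanged.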