Hadoop MapReduce: A Getting-Started Example
Posted by bruce128
1. Preparation
- Download the latest Hadoop release from the official Hadoop website (3.1.2 at the time of writing)
- Configure the Hadoop-related environment variables:
export HADOOP_HOME=/work/dev_tools/hadoop-3.1.2
export PATH=$HADOOP_HOME/bin:$PATH:.
export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar
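To confirm these exports are actually visible to the JVM, a throwaway helper like the one below can print them (this is my own check, not part of the Hadoop setup; the variable names are taken from the exports above):

```java
// EnvCheck.java — tiny throwaway helper that prints the environment
// variables exported above so you can confirm the JVM sees them.
public class EnvCheck {

    // Format one variable for printing; "(not set)" flags a missing export.
    static String describe(String name, String value) {
        return name + " = " + (value == null ? "(not set)" : value);
    }

    public static void main(String[] args) {
        for (String name : new String[]{"JAVA_HOME", "HADOOP_HOME", "HADOOP_CLASSPATH"}) {
            System.out.println(describe(name, System.getenv(name)));
        }
    }
}
```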
2. MapReduce Code Example
Goal: given a set of text files, count the frequency of every word.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import java.io.IOException;
import java.util.StringTokenizer;
/**
* @author lvsheng
* @date 2019-09-01
**/
public class WordCount {

    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one  = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Split the line on whitespace and emit (word, 1) for each token.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum all the 1s emitted for this word.
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();

        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        // The reducer doubles as a combiner to pre-aggregate on the map side.
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.waitForCompletion(true);
        System.out.println("time cost : " + (System.currentTimeMillis() - start) / 1000 + " s");
    }
}
Note: my class sits in the default package, so it has no package path. When the class did have a package path, running the job failed; more on that below.
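The map/reduce logic above can be sanity-checked without a cluster. The sketch below is my own plain-Java rewrite (not part of the Hadoop job): it tokenizes text with the same StringTokenizer and sums counts the way IntSumReducer does, with both phases collapsed into one HashMap pass.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.StringTokenizer;

// Plain-Java sketch of the same computation: "map" emits (word, 1) per token,
// "reduce" sums the 1s per word; here a single HashMap does both.
public class LocalWordCount {

    public static Map<String, Integer> count(String text) {
        Map<String, Integer> freq = new HashMap<>();
        // Same tokenization as TokenizerMapper: whitespace-delimited tokens.
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            // merge() plays the role of IntSumReducer: accumulate the count.
            freq.merge(itr.nextToken(), 1, Integer::sum);
        }
        return freq;
    }

    public static void main(String[] args) {
        System.out.println(count("to be or not to be"));
    }
}
```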
3. Running the Job
- First, compile the class:
hadoop com.sun.tools.javac.Main WordCount.java
- Package the compiled class files into a jar:
jar cf WordCount.jar WordCount*.class
- Run the job:
hadoop jar WordCount.jar WordCount /Users/lvsheng/Movies/aclImdb/train/pos /temp/output2
The input directory I used is fairly large, so the job took over an hour to finish on a single machine.
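Each reducer output file under the output directory (e.g. part-r-00000) contains one word and its count per line, separated by a tab, which is the default key/value separator of Hadoop's TextOutputFormat. A small sketch of my own (the class name is hypothetical) for parsing such a line:

```java
// Parse one line of WordCount output. TextOutputFormat writes
// "key<TAB>value" by default, so a line looks like: "hadoop\t42".
public class OutputLine {
    final String word;
    final int count;

    OutputLine(String line) {
        // Split on the first tab; the key itself contains no tabs here
        // because StringTokenizer already split the input on whitespace.
        int tab = line.indexOf('\t');
        this.word = line.substring(0, tab);
        this.count = Integer.parseInt(line.substring(tab + 1));
    }
}
```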
A small problem I ran into
When the job class had a package path, running it kept failing with a class-not-found error, whether or not the fully qualified class name was given on the command line.
The command with the package path:
hadoop jar WordCount.jar com.alibaba.ruzun.WordCount /Users/lvsheng/Movies/aclImdb/train/pos /temp/output2
The error stack:
Exception in thread "main" java.lang.ClassNotFoundException: com.alibaba.ruzun.WordCount
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.util.RunJar.run(RunJar.java:311)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Removing the package path from the command did not help either. Since the package path was causing the problem, I simply moved the job class into the default package so there was no package path, and the problem went away. (A likely cause: a class declared in package `com.alibaba.ruzun` must sit at `com/alibaba/ruzun/WordCount.class` inside the jar, but `jar cf WordCount.jar WordCount*.class` puts the class files at the jar root, so the classloader cannot resolve the fully qualified name. I'll verify the root cause when I dig deeper later.)