Hadoop Lesson 5: Developing Map/Reduce in Java
Posted by 跃小云
Configure the system environment variable HADOOP_HOME to point to the Hadoop installation directory (to spare yourself unnecessary trouble, avoid spaces and Chinese characters in the path).
Add HADOOP_HOME/bin to the PATH environment variable (not required, just convenient).
If you are developing on Windows, you also need the Windows native library files:
Overwrite HADOOP_HOME/bin with the bin directory shared on the course disk.
If it still does not work, copy hadoop.dll from that directory into C:\Windows\System32; you may need to restart the machine.
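Before going further, it can save time to verify the environment from Java. This is my own addition, not part of the lesson; it uses only plain JDK calls (System.getenv, java.io.File), and assumes the native hadoop.dll mentioned above lives under HADOOP_HOME/bin:

import java.io.File;

public class EnvCheck {
    public static void main(String[] args) {
        // HADOOP_HOME must be set as described above
        String home = System.getenv("HADOOP_HOME");
        System.out.println("HADOOP_HOME = " + home);
        // hadoop.dll is the native library the Windows setup step copies around
        System.out.println("hadoop.dll present: "
                + new File(home, "bin/hadoop.dll").exists());
    }
}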
Create a new project and add the jar files that Hadoop needs.
WordMapper code:
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    protected void map(LongWritable key, Text value,
            Mapper<LongWritable, Text, Text, IntWritable>.Context context)
            throws IOException, InterruptedException {
        // key is the byte offset of the line; value is the line itself
        String line = value.toString();
        String[] words = line.split(" ");
        // emit (word, 1) for every word in the line
        for (String word : words) {
            context.write(new Text(word), new IntWritable(1));
        }
    }
}
WordReducer code:
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordReducer extends Reducer<Text, IntWritable, Text, LongWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values,
            Reducer<Text, IntWritable, Text, LongWritable>.Context context)
            throws IOException, InterruptedException {
        // sum up the 1s emitted by the mappers for this word
        long count = 0;
        for (IntWritable v : values) {
            count += v.get();
        }
        context.write(key, new LongWritable(count));
    }
}
Driver code (Test):
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Test {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf);

        job.setMapperClass(WordMapper.class);
        job.setReducerClass(WordReducer.class);
        // the map output types differ from the final output types, so set both pairs
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(LongWritable.class);

        FileInputFormat.setInputPaths(job, "c:/bigdata/hadoop/test/test.txt");
        // the output directory must not exist before the job runs
        FileOutputFormat.setOutputPath(job, new Path("c:/bigdata/hadoop/test/out/"));

        job.waitForCompletion(true);
    }
}
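MapReduce refuses to start if the output directory already exists, which makes repeated test runs annoying. A small optional sketch (not part of the original lesson; FileSystem.get, exists, and delete are standard Hadoop APIs) that clears the output directory before submission:

// Optional: insert into main() after creating conf, before waitForCompletion.
org.apache.hadoop.fs.FileSystem fs = org.apache.hadoop.fs.FileSystem.get(conf);
Path out = new Path("c:/bigdata/hadoop/test/out/");
if (fs.exists(out)) {
    fs.delete(out, true); // true = delete recursively
}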
Pulling the files in HDFS down to run locally:
FileInputFormat.setInputPaths(job, "hdfs://master:9000/wcinput/");
FileOutputFormat.setOutputPath(job, new Path("hdfs://master:9000/wcoutput2/"));
Note that this merely pulls the HDFS files down and still runs the job locally; if you watch the console output, you will see a job ID containing the word "local".
Running this way does not require YARN at all (stop the YARN service yourself and verify).
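To see which runner you actually got, you can print the job ID after the job finishes; Job#getJobID() is part of the public API. This check is my own addition, not from the lesson:

// After job.waitForCompletion(true): a local-runner ID contains "local",
// e.g. job_local1234567890_0001, while a YARN ID looks like job_1500000000000_0001.
System.out.println("job id: " + job.getJobID());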
Executing on the remote server:
conf.set("fs.defaultFS", "hdfs://master:9000/"); conf.set("mapreduce.job.jar", "target/wc.jar"); conf.set("mapreduce.framework.name", "yarn"); conf.set("yarn.resourcemanager.hostname", "master"); conf.set("mapreduce.app-submission.cross-platform", "true"); FileInputFormat.setInputPaths(job, "/wcinput/"); FileOutputFormat.setOutputPath(job, new Path("/wcoutput3/"));
If you run into permission problems, add the JVM argument -DHADOOP_USER_NAME=root to the run configuration.
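An equivalent that avoids editing the run configuration (my own note, not from the lesson, but HADOOP_USER_NAME is a standard Hadoop client property) is to set the system property in code:

// Must run before new Configuration() / any FileSystem access;
// equivalent to passing -DHADOOP_USER_NAME=root on the command line.
System.setProperty("HADOOP_USER_NAME", "root");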
Alternatively, copy Hadoop's four configuration files (core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml) from the cluster into the src root; then no manual conf.set(...) calls are needed, because by default Hadoop looks for them on the classpath.
Or put the configuration files somewhere else and add them with conf.addResource(Test.class.getClassLoader().getResourceAsStream(...)); the absolute-path approach is not recommended.
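A minimal sketch of that approach. The "conf/" classpath folder is my assumption, not from the lesson; Configuration.addResource(InputStream) is a standard overload:

Configuration conf = new Configuration();
// "conf/core-site.xml" etc. are hypothetical classpath locations;
// adjust them to wherever the XML files actually live on your classpath
conf.addResource(Test.class.getClassLoader().getResourceAsStream("conf/core-site.xml"));
conf.addResource(Test.class.getClassLoader().getResourceAsStream("conf/mapred-site.xml"));
conf.addResource(Test.class.getClassLoader().getResourceAsStream("conf/yarn-site.xml"));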