Hadoop in Action: Building an Inverted Index with MapReduce
An inverted index is, in essence, the result of swapping keys and values in the output.
1. Requirement: the records below are user music-play logs (one user name and one song name per line). For each song, find which users have played it.
tom LittleApple
jack YesterdayOnceMore
Rose MyHeartWillGoOn
jack LittleApple
John MyHeartWillGoOn
kissinger LittleApple
kissinger YesterdayOnceMore
2. Expected output
LittleApple	tom|jack|kissinger
YesterdayOnceMore	jack|kissinger
MyHeartWillGoOn	Rose|John
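Before looking at the MapReduce version, the inversion itself can be sketched in plain Java on the sample data above (class and variable names here are illustrative, not part of the Hadoop job):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class InvertedIndexSketch {
    public static void main(String[] args) {
        // The same user -> song play records as the sample input.
        String[] records = {
            "tom LittleApple", "jack YesterdayOnceMore", "Rose MyHeartWillGoOn",
            "jack LittleApple", "John MyHeartWillGoOn",
            "kissinger LittleApple", "kissinger YesterdayOnceMore"
        };
        // Invert: song -> list of users, keeping first-seen order.
        Map<String, List<String>> index = new LinkedHashMap<>();
        for (String record : records) {
            String[] parts = record.split("\\s+"); // parts[0] = user, parts[1] = song
            index.computeIfAbsent(parts[1], k -> new ArrayList<>()).add(parts[0]);
        }
        for (Map.Entry<String, List<String>> e : index.entrySet()) {
            System.out.println(e.getKey() + "\t" + String.join("|", e.getValue()));
        }
    }
}
```

This prints exactly the expected output shown above. The MapReduce job does the same thing, except the shuffle phase performs the grouping-by-song instead of the `LinkedHashMap`.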
3. MapReduce code
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class Music {

    public static class MusicMap extends Mapper<Object, Text, Text, Text> {
        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // Each input line looks like "tom LittleApple":
            // the user name, then the song name, separated by whitespace.
            String[] splits = value.toString().trim().split("\\s+");
            if (splits.length < 2) {
                return; // skip malformed lines
            }
            String name = splits[0];
            String music = splits[1];
            // Emit the song as the key and the user as the value --
            // exactly the key/value swap that makes this an inverted index.
            context.write(new Text(music), new Text(name));
        }
    }

    public static class MusicReduce extends Reducer<Text, Text, Text, Text> {
        private Text userNames = new Text();

        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            // Join all users who played this song with "|".
            StringBuilder result = new StringBuilder();
            for (Text name : values) {
                if (result.length() > 0) {
                    result.append("|");
                }
                result.append(name.toString());
            }
            userNames.set(result.toString());
            context.write(key, userNames);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: Music <in> <out>");
            System.exit(2);
        }
        Job job = Job.getInstance(conf, "Music Inverted Index");
        job.setJarByClass(Music.class);
        job.setMapperClass(MusicMap.class);
        // No combiner: the reducer concatenates values, which is not an
        // associative operation, so running it map-side would corrupt output.
        job.setReducerClass(MusicReduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
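To try the job on a cluster, the usual steps are to package the class into a jar and submit it with the input and output HDFS paths. These commands are a sketch only; the jar name and paths below are assumptions, not from the original post:

hadoop fs -put music.txt /input/music.txt        # upload the play records (hypothetical path)
hadoop jar music.jar Music /input /output        # run the job; /output must not already exist
hadoop fs -cat /output/part-r-00000              # view the inverted index

Note that MapReduce refuses to start if the output directory already exists, so remove it (hadoop fs -rm -r /output) before re-running.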