Hadoop Map Reduce - nested loop over an Iterable in reduce: values are ignored when writing the Text result to the context


I am new to Hadoop and am trying to run a MapReduce job on a simple input file (see the example below). I use two nested for loops to build a kind of Cartesian product from the list of values, but for some reason the result value I write is always empty. After some fiddling, it only works if I set the result Text while iterating (which sounds strange to me too). I would appreciate help understanding this problem; I am probably doing something wrong.

This is my input file:

A 1
B 2
C 1
D 2
C 2
E 1

I would like to get the following output:

1 A-C, A-E, C-E
2 B-C, B-D, C-D

So I tried to implement the following MapReduce job:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class DigitToPairOfLetters {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, Text> {

        private Text digit = new Text();
        private Text letter = new Text();

        public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                letter.set(itr.nextToken());
                digit.set(itr.nextToken());
                context.write(digit, letter);
            }
        }
    }

    public static class DigitToLetterReducer
            extends Reducer<Text, Text, Text, Text> {
        private Text result = new Text();

        public void reduce(Text key, Iterable<Text> values,
                Context context
                ) throws IOException, InterruptedException {
            List<String> valuesList = new ArrayList<>();
            for (Text value :values) {
                valuesList.add(value.toString());
            }
            StringBuilder builder = new StringBuilder();
            for (int i=0; i<valuesList.size(); i++) {
                for (int j=i+1; j<valuesList.size(); j++) {
                    builder.append(valuesList.get(i)).append(" ").append(valuesList.get(j)).append(",");
                }
            }
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "digit to letter");
        job.setJarByClass(DigitToPairOfLetters.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(DigitToLetterReducer.class);
        job.setReducerClass(DigitToLetterReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

But this code gives me the following output, with the lists empty:

1
2
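For readers hitting the same issue: the empty output is consistent with `result` (the reducer's `Text` field) never being updated. The pairs are accumulated in `builder`, but `context.write(key, result)` writes the still-empty `Text`. A likely fix is to copy the builder into the result once, after both loops: `result.set(builder.toString());`. The pairing logic itself can be sketched and checked outside Hadoop; the plain-Java class and the `joinPairs` helper name below are mine, and the values are joined with `-` here to match the desired output:

```java
import java.util.Arrays;
import java.util.List;

// Standalone sketch of the reducer's pairing logic (no Hadoop needed),
// assuming the fix is to copy the builder into the output Text once the
// loops finish, e.g. result.set(builder.toString()) before context.write.
public class PairJoinDemo {

    // Builds "A-C, A-E, C-E" style pairs from the collected values.
    static String joinPairs(List<String> values) {
        StringBuilder builder = new StringBuilder();
        for (int i = 0; i < values.size(); i++) {
            for (int j = i + 1; j < values.size(); j++) {
                if (builder.length() > 0) {
                    builder.append(", ");
                }
                builder.append(values.get(i)).append("-").append(values.get(j));
            }
        }
        // The original bug: this string was built but never copied into the
        // reducer's `result` field, so an empty Text was written instead.
        return builder.toString();
    }

    public static void main(String[] args) {
        System.out.println(joinPairs(Arrays.asList("A", "C", "E")));
        System.out.println(joinPairs(Arrays.asList("B", "C", "D")));
    }
}
```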

When I add a set on result inside the for loops, it seems to work:

public class DigitToPairOfLetters {

    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, Text> {

        private Text digit = new Text();
        private Text letter = new Text();

        public void map(Object key, Text value, Context context
                ) throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                letter.set(itr.nextToken());
                digit.set(itr.nextToken());
                context.write(digit, letter);
            }
        }
    }

    public static class DigitToLetterReducer
            extends Reducer<Text, Text, Text, Text> {
        private Text result = new Text();

        public void reduce(Text key, Iterable<Text> values,
                Context context
                ) throws IOException, InterruptedException {
            List<String> valuesList = new ArrayList<>();
            for (Text value :values) {
                valuesList.add(value.toString());
                // TODO: We set the valuesList in the result since otherwise the
                // hadoop process will ignore the values in it.
                result.set(valuesList.toString());
            }
            StringBuilder builder = new StringBuilder();
            for (int i=0; i<valuesList.size(); i++) {
                for (int j=i+1; j<valuesList.size(); j++) {
                    builder.append(valuesList.get(i)).append(" ").append(valuesList.get(j)).append(",");
                    // TODO: We set the builder every iteration in the loop since otherwise the hadoop process will
                    // ignore the values
                    result.set(builder.toString());
                }
            }
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "digit to letter");
        job.setJarByClass(DigitToPairOfLetters.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(DigitToLetterReducer.class);
        job.setReducerClass(DigitToLetterReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

This gives me the following result:

1   [A C,A E,C E]
2   [B C,B D,C D]

I would appreciate your help.
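A side note on the driver: registering `DigitToLetterReducer` as the combiner is probably also a problem. A combiner runs zero or more times on partial map output, so its output must have the same form as the mapper's output; here the reducer turns single letters into pair strings, so on larger inputs the real reducer would receive already-paired strings and pair them again. A sketch of the driver configuration without the combiner (same class names as in the listings above):

```java
// Driver configuration without a combiner: DigitToLetterReducer's output
// (pair strings) is not valid input for another reduce pass over letters,
// so it cannot safely be reused as a combiner.
Job job = Job.getInstance(conf, "digit to letter");
job.setJarByClass(DigitToPairOfLetters.class);
job.setMapperClass(TokenizerMapper.class);
// job.setCombinerClass(DigitToLetterReducer.class);  // removed
job.setReducerClass(DigitToLetterReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
```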
