Why doesn't this Hadoop example that uses the Combiner class work properly? (It doesn't perform the "local reduction" provided by the Combiner)

Posted: 2016-02-13 18:48:54

【Question】:

I am new to Hadoop and I am doing some experiments, trying to use the Combiner class to perform the reduce operation locally on the same node as the mapper. I am using Hadoop version 1.2.1.

So I have these 3 classes:

1) WordCountWithCombiner.java

// Learning MapReduce by Nitesh Jain
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;

/*
 * Extend the Configured class and implement the Tool interface so that the
 * driver can be run through ToolRunner and pick up -D command-line options.
 */
public class WordCountWithCombiner extends Configured implements Tool {

  @Override
  public int run(String[] args) throws Exception {
    Configuration conf = getConf();

    Job job = new Job(conf, "MyJob");   // Job is a "dashboard" with levers to control the execution of the job

    job.setJarByClass(WordCountWithCombiner.class);   // Driver class used to locate the jar
    job.setJobName("Word Count With Combiners");      // Set the name of the job

    FileInputFormat.addInputPath(job, new Path(args[0]));     // The input path is the first parameter of the main() method
    FileOutputFormat.setOutputPath(job, new Path(args[1]));   // The output path is the second parameter of the main() method

    job.setMapperClass(WordCountMapper.class);        // Set the mapper class

    /* Set the combiner: the combiner is a reducer performed locally on the same mapper node (we are reusing the previous
     * WordCountReducer class because it performs the same task, but locally to the mapper):
     */
    job.setCombinerClass(WordCountReducer.class);
    job.setReducerClass(WordCountReducer.class);      // Set the reducer class

    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);

    return job.waitForCompletion(true) ? 0 : 1;
  }

  public static void main(String[] args) throws Exception {
    /* The ToolRunner object is used to trigger the run() function, which contains all the batch execution logic.
     * It gives the ability to set properties at run time, so we do not need to write a single line of code to handle them.
     */
    int exitCode = ToolRunner.run(new Configuration(), new WordCountWithCombiner(), args);
    System.exit(exitCode);
  }
}

2) WordCountMapper.java

// Learning MapReduce by Nitesh J.
// Word Count Mapper. 
import java.io.IOException;
import java.util.StringTokenizer;

// Import KEY AND VALUES DATATYPE:
import org.apache.hadoop.io.IntWritable;    // Similar to int
import org.apache.hadoop.io.LongWritable;   // Similar to Long
import org.apache.hadoop.io.Text;           // Similar to String

import org.apache.hadoop.mapreduce.Mapper;

/* Every mapper class extends the Hadoop Mapper class.
 * The four type parameters are:
 * @param input key type (the byte offset of the line)
 * @param input value type (the line of text, something like a String)
 * @param output key type
 * @param output value type
 */
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

  private final static IntWritable one = new IntWritable(1);
  private Text word = new Text();

  /* Override the map() function defined by the extended Mapper class:
   * the input parameters have to match those defined in the extended Mapper class.
   * @param context: used to emit the output <key, value> pairs.
   *
   * Tokenize the line into words and write each word into the context, with the word as key and one (1) as value.
   */
  @Override
  public void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {

      String line = value.toString();
      StringTokenizer itr = new StringTokenizer(line);

      while (itr.hasMoreTokens()) {
          // Convert everything to lower case.
          word.set(itr.nextToken().toLowerCase());
          // Only emit the word if it starts with a letter.
          if (Character.isAlphabetic(word.toString().charAt(0))) {
              context.write(word, one);
          }
      }
  }
}


3) WordCountReducer.java

// Learning MapReduce by Nitesh Jain
import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/* Every reducer class has to extend the Hadoop Reducer class.
 * The four type parameters are:
 * @param the mapper output key type (Text, the word)
 * @param the mapper output value type (the number of occurrences of the related word: 1)
 * @param the reducer output key type (the word)
 * @param the reducer output value type (the total number of occurrences of the related word)
 * They have to match the Mapper's output types.
 */
public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    /*
     * Override the reduce() function defined by the extended Reducer class.
     * @param key: the current word
     * @param Iterable<IntWritable> values: the input of the reduce() function is a key and the list of values associated with that key
     * @param context: collects the output <key, value> pairs
     */
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {

        int sum = 0;
        for (IntWritable value : values) {
          sum += value.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
As you can see in the WordCountWithCombiner driver class, I have set the WordCountReducer class as the combiner, so that the reduction is performed directly on the mapper node, via the following line:

job.setCombinerClass(WordCountReducer.class);

Then I have this input file on the Hadoop file system:

andrea@andrea-virtual-machine:~/workspace/HadoopExperiment/bin$ hadoop fs -cat  in
to be or not to be

and that is the file I want to process.

If I run the previous job in the classic way, through both MapReduce phases, it works fine; in fact, executing this statement in the Linux shell:

andrea@andrea-virtual-machine:~/workspace/HadoopExperiment/bin$ hadoop jar WordCount.jar WordCountWithCombiner in out6

Hadoop works correctly and I get the expected result:

andrea@andrea-virtual-machine:~/workspace/HadoopExperiment/bin$ hadoop fs -cat  out6/p*
be  2
not 1
or  1
to  2
andrea@andrea-virtual-machine:~/workspace/HadoopExperiment/bin$ 

OK, so that works fine.

The problem is that now I do not want to run the reduce phase, and yet I expect to get the same result, because I have set the combiner, which performs the same operation on the same node as the mapper.

So, in the Linux shell, I executed this statement, which excludes the reducer phase:

hadoop jar WordCountWithCombiner.jar WordCountWithCombiner -D mapred.reduce.tasks=0 in out7
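
(For reference, a minimal sketch of the programmatic equivalent in the driver, assuming the standard new-API Job class, would be:)

// Equivalent to passing -D mapred.reduce.tasks=0 on the command line:
// run the job map-only, so the map output is written directly to HDFS.
job.setNumReduceTasks(0);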

But it does not work properly, because this is what I get (I am posting the whole output to give more information about what is happening):

andrea@andrea-virtual-machine:~/workspace/HadoopExperiment/bin$ hadoop jar WordCountWithCombiner.jar WordCountWithCombiner -D mapred.reduce.tasks=0 in out7
16/02/13 19:43:44 INFO input.FileInputFormat: Total input paths to process : 1
16/02/13 19:43:44 INFO util.NativeCodeLoader: Loaded the native-hadoop library
16/02/13 19:43:44 WARN snappy.LoadSnappy: Snappy native library not loaded
16/02/13 19:43:45 INFO mapred.JobClient: Running job: job_201601242121_0008
16/02/13 19:43:46 INFO mapred.JobClient:  map 0% reduce 0%
16/02/13 19:44:00 INFO mapred.JobClient:  map 100% reduce 0%
16/02/13 19:44:05 INFO mapred.JobClient: Job complete: job_201601242121_0008
16/02/13 19:44:05 INFO mapred.JobClient: Counters: 19
16/02/13 19:44:05 INFO mapred.JobClient:   Job Counters 
16/02/13 19:44:05 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=18645
16/02/13 19:44:05 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
16/02/13 19:44:05 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
16/02/13 19:44:05 INFO mapred.JobClient:     Launched map tasks=1
16/02/13 19:44:05 INFO mapred.JobClient:     Data-local map tasks=1
16/02/13 19:44:05 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=0
16/02/13 19:44:05 INFO mapred.JobClient:   File Output Format Counters 
16/02/13 19:44:05 INFO mapred.JobClient:     Bytes Written=31
16/02/13 19:44:05 INFO mapred.JobClient:   FileSystemCounters
16/02/13 19:44:05 INFO mapred.JobClient:     HDFS_BYTES_READ=120
16/02/13 19:44:05 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=55503
16/02/13 19:44:05 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=31
16/02/13 19:44:05 INFO mapred.JobClient:   File Input Format Counters 
16/02/13 19:44:05 INFO mapred.JobClient:     Bytes Read=19
16/02/13 19:44:05 INFO mapred.JobClient:   Map-Reduce Framework
16/02/13 19:44:05 INFO mapred.JobClient:     Map input records=1
16/02/13 19:44:05 INFO mapred.JobClient:     Physical memory (bytes) snapshot=93282304
16/02/13 19:44:05 INFO mapred.JobClient:     Spilled Records=0
16/02/13 19:44:05 INFO mapred.JobClient:     CPU time spent (ms)=2870
16/02/13 19:44:05 INFO mapred.JobClient:     Total committed heap usage (bytes)=58195968
16/02/13 19:44:05 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=682741760
16/02/13 19:44:05 INFO mapred.JobClient:     Map output records=6
16/02/13 19:44:05 INFO mapred.JobClient:     SPLIT_RAW_BYTES=101
andrea@andrea-virtual-machine:~/workspace/HadoopExperiment/bin$ hadoop fs -cat  out7/p*
to  1
be  1
or  1
not 1
to  1
be  1

As you can see, the local reduction provided by the Combiner does not seem to be applied.

Why? What am I missing? How can I fix this?

Tnx

【Comments】:

【Answer 1】:

Do not assume that the combiner will run. Think of the combiner only as an optimization. The Combiner is not guaranteed to be run over all of your data. In cases where the data does not need to be spilled to disk, MapReduce will skip using the Combiner entirely. Also note that the Combiner may be run multiple times, over subsets of the data! It runs once per spill.

So setting the number of reducers to 0 does not mean you should get the "reduced" result, because not all of the mapper data is covered by the Combiner. In your run in particular, with zero reducers the map output is written straight to HDFS and never goes through the sort/spill path where the combiner would be invoked; notice the "Spilled Records=0" counter in your output.
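
A quick way to check whether the combiner ever ran is to look at the combine counters after the job completes. Below is a minimal sketch for the driver; it assumes the new-API org.apache.hadoop.mapreduce.Counters with its findCounter(String, String) lookup and the Hadoop 1.x counter group name, both of which may differ in other versions:

// After job.waitForCompletion(true), inspect the framework counters.
// The counter group/name below are the Hadoop 1.x ones; treat them as an assumption.
Counters counters = job.getCounters();   // org.apache.hadoop.mapreduce.Counters
long combineIn  = counters.findCounter("org.apache.hadoop.mapred.Task$Counter",
                                       "COMBINE_INPUT_RECORDS").getValue();
long combineOut = counters.findCounter("org.apache.hadoop.mapred.Task$Counter",
                                       "COMBINE_OUTPUT_RECORDS").getValue();
System.out.println("Combine input records:  " + combineIn);
System.out.println("Combine output records: " + combineOut);
// If both stay at 0 (as in a map-only job, where nothing is spilled), the combiner never ran.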

【Discussion】:

So do you mean that, even in this specific case where the Combiner has the same logic as the reducer, the choice of whether or not to use the combiner is made arbitrarily by Hadoop according to its own algorithm? So I cannot be sure that it will be executed? Is that right?
