Hadoop Input Split Mechanism


Source code of the split mechanism (FileInputFormat.getSplits):

① for (FileStatus file: files): each input file is split on its own; a split never spans two files.

② long length = file.getLen(): gets the file's size in bytes.

③ while (((double) bytesRemaining)/splitSize > SPLIT_SLOP): SPLIT_SLOP is 1.1. A file larger than the split size is cut into multiple splits by this loop; once the remaining bytes are no more than 1.1 times the split size, slicing stops and the remainder becomes the final split (see the worked sketch after the source listing below).

  /** 
   * Generate the list of files and make them into FileSplits.
   * @param job the job context
   * @throws IOException
   */
  public List<InputSplit> getSplits(JobContext job) throws IOException {
    StopWatch sw = new StopWatch().start();
    long minSize = Math.max(getFormatMinSplitSize(), getMinSplitSize(job));
    long maxSize = getMaxSplitSize(job);

    // generate splits
    List<InputSplit> splits = new ArrayList<InputSplit>();
    List<FileStatus> files = listStatus(job);
    for (FileStatus file: files) {
      Path path = file.getPath();
      long length = file.getLen();
      if (length != 0) {
        BlockLocation[] blkLocations;
        if (file instanceof LocatedFileStatus) {
          blkLocations = ((LocatedFileStatus) file).getBlockLocations();
        } else {
          FileSystem fs = path.getFileSystem(job.getConfiguration());
          blkLocations = fs.getFileBlockLocations(file, 0, length);
        }
        if (isSplitable(job, path)) {
          long blockSize = file.getBlockSize();
          long splitSize = computeSplitSize(blockSize, minSize, maxSize);

          long bytesRemaining = length;
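          // SPLIT_SLOP is 1.1: keep cutting full-size splits only while the
          // remainder is more than 10% larger than splitSize.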
          while (((double) bytesRemaining)/splitSize > SPLIT_SLOP) {
            int blkIndex = getBlockIndex(blkLocations, length-bytesRemaining);
            splits.add(makeSplit(path, length-bytesRemaining, splitSize,
                        blkLocations[blkIndex].getHosts(),
                        blkLocations[blkIndex].getCachedHosts()));
            bytesRemaining -= splitSize;
          }

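          // The remainder (at most 1.1 * splitSize bytes) becomes the last split.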
          if (bytesRemaining != 0) {
            int blkIndex = getBlockIndex(blkLocations, length-bytesRemaining);
            splits.add(makeSplit(path, length-bytesRemaining, bytesRemaining,
                       blkLocations[blkIndex].getHosts(),
                       blkLocations[blkIndex].getCachedHosts()));
          }
        } else { // not splitable
          splits.add(makeSplit(path, 0, length, blkLocations[0].getHosts(),
                      blkLocations[0].getCachedHosts()));
        }
      } else { 
        //Create empty hosts array for zero length files
        splits.add(makeSplit(path, 0, length, new String[0]));
      }
    }
    // Save the number of input files for metrics/loadgen
    job.getConfiguration().setLong(NUM_INPUT_FILES, files.size());
    sw.stop();
    if (LOG.isDebugEnabled()) {
      LOG.debug("Total # of splits generated by getSplits: " + splits.size()
          + ", TimeTaken: " + sw.now(TimeUnit.MILLISECONDS));
    }
    return splits;
  }
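
To see what the 1.1 rule does in practice, here is a minimal standalone sketch (plain Java, not the Hadoop API; the 128 MB split size and file sizes are made-up examples) that replays the loop above:

  // Replays the slicing loop from getSplits for a single file.
  public class SplitSlopDemo {
    private static final double SPLIT_SLOP = 1.1; // same constant as FileInputFormat

    static java.util.List<Long> slice(long length, long splitSize) {
      java.util.List<Long> sizes = new java.util.ArrayList<>();
      long bytesRemaining = length;
      // Cut full-size splits while the remainder exceeds 1.1 * splitSize.
      while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
        sizes.add(splitSize);
        bytesRemaining -= splitSize;
      }
      if (bytesRemaining != 0) {
        sizes.add(bytesRemaining); // the tail becomes the last split
      }
      return sizes;
    }

    public static void main(String[] args) {
      long mb = 1024L * 1024;
      // 300 MB file, 128 MB splits: three splits of 128 MB, 128 MB, 44 MB
      System.out.println(slice(300 * mb, 128 * mb));
      // 130 MB file: 130/128 is about 1.02 < 1.1, so it stays one 130 MB split
      System.out.println(slice(130 * mb, 128 * mb));
    }
  }

Note the second case: a 130 MB file on a 128 MB block is not cut into 128 MB + 2 MB; the 1.1 slop avoids creating a near-empty map task for the 2 MB tail.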

How the split size is computed

① minSize defaults to 1 and maxSize defaults to Long.MAX_VALUE, so by the formula the split size defaults to blockSize.

② To make splits larger than the block size, configure minSize > blockSize; to make splits smaller than the block size, configure maxSize < blockSize (see the driver sketch below).
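
For example, in a job driver (a sketch; the job setup is hypothetical, but setMinInputSplitSize / setMaxInputSplitSize are the standard FileInputFormat helpers):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.mapreduce.Job;
  import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

  public class SplitSizeConfig {
    public static void main(String[] args) throws Exception {
      Job job = Job.getInstance(new Configuration(), "split-size-demo");

      // Splits larger than a block: raise minSize above blockSize (256 MB here).
      FileInputFormat.setMinInputSplitSize(job, 256L * 1024 * 1024);
      // Equivalent property: mapreduce.input.fileinputformat.split.minsize

      // Splits smaller than a block: lower maxSize below blockSize instead.
      // Normally you would set only one of the two directions.
      // FileInputFormat.setMaxInputSplitSize(job, 32L * 1024 * 1024); // 32 MB
      // Equivalent property: mapreduce.input.fileinputformat.split.maxsize
    }
  }

Both knobs feed into the formula below: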

  protected long computeSplitSize(long blockSize, long minSize,
                                  long maxSize) {
    return Math.max(minSize, Math.min(maxSize, blockSize));
  }
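
Plugging numbers into the formula confirms all three cases (a standalone sketch; the 128 MB block size is just an example):

  public class SplitSizeMath {
    static long computeSplitSize(long blockSize, long minSize, long maxSize) {
      return Math.max(minSize, Math.min(maxSize, blockSize));
    }

    public static void main(String[] args) {
      long mb = 1024L * 1024;
      long blockSize = 128 * mb;
      // Defaults (minSize = 1, maxSize = Long.MAX_VALUE): splitSize == blockSize
      System.out.println(computeSplitSize(blockSize, 1, Long.MAX_VALUE) / mb);        // 128
      // minSize = 256 MB > blockSize: splits grow to 256 MB
      System.out.println(computeSplitSize(blockSize, 256 * mb, Long.MAX_VALUE) / mb); // 256
      // maxSize = 32 MB < blockSize: splits shrink to 32 MB
      System.out.println(computeSplitSize(blockSize, 1, 32 * mb) / mb);               // 32
    }
  }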

