Big Data Primer, Day 7: MapReduce in Detail

Posted by jiangbei


I. Overview

  1. What is MapReduce?

Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.

A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring them and re-executes the failed tasks.

Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System (see HDFS Architecture Guide) are running on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.

The MapReduce framework consists of a single master ResourceManager, one worker NodeManager per cluster-node, and MRAppMaster per application (see YARN Architecture Guide).

Minimally, applications specify the input/output locations and supply map and reduce functions via implementations of appropriate interfaces and/or abstract-classes. These, and other job parameters, comprise the job configuration.

The Hadoop job client then submits the job (jar/executable etc.) and configuration to the ResourceManager which then assumes the responsibility of distributing the software/configuration to the workers, scheduling tasks and monitoring them, providing status and diagnostic information to the job-client.

Although the Hadoop framework is implemented in Java™, MapReduce applications need not be written in Java.

Hadoop Streaming is a utility which allows users to create and run jobs with any executables (e.g. shell utilities) as the mapper and/or the reducer.

Hadoop Pipes is a SWIG-compatible C++ API to implement MapReduce applications (non JNI™ based).
(Excerpt from the official Hadoop documentation)
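The Streaming model mentioned in the excerpt can be mimicked locally with ordinary shell utilities: a mapper emits one record per line, a sort step groups identical keys (as the framework does between the map and reduce phases), and a reducer aggregates each group. A minimal word-count sketch follows; note that no Hadoop is involved here, the pipeline only imitates Streaming's map-sort-reduce contract:

```shell
# Mapper stand-in: split input into one word per line (each word is a "key").
# sort: groups identical keys together, as MapReduce does in its shuffle/sort.
# Reducer stand-in: uniq -c counts the records in each key group.
printf 'hello world\nhello mapreduce\n' \
  | tr ' ' '\n' \
  | sort \
  | uniq -c
# Output lists each word with its count, e.g. "2 hello" (padding varies by platform).
```

In a real Streaming job the same roles would be played by executables passed as the mapper and reducer, with Hadoop supplying the splitting, sorting, and scheduling.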


    To borrow one reader's summary:

  MapReduce processing consists of two phases: map and reduce. The input and output of each phase are key-value pairs, and the key and value types can be specified by the user. The map phase processes the split input chunks in parallel; its results are passed to reduce, where the reduce function performs the final aggregation.
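The two-phase, key-value flow in that summary can be sketched in plain Python. This is a conceptual simulation only, not the Hadoop API; the function names `map_phase` and `reduce_phase` are invented for illustration:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    """Map: each input record yields (key, value) pairs; here (word, 1)."""
    for line in records:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce: sort pairs by key (as the framework's shuffle/sort would),
    then sum the values within each key group."""
    ordered = sorted(pairs, key=itemgetter(0))
    for key, group in groupby(ordered, key=itemgetter(0)):
        yield (key, sum(value for _, value in group))

# Two "input splits", processed through both phases:
chunks = ["hello world", "hello mapreduce"]
counts = dict(reduce_phase(map_phase(chunks)))
print(counts)  # {'hello': 2, 'mapreduce': 1, 'world': 1}
```

In real MapReduce the map calls run in parallel across the cluster and the framework, not the user, performs the sorting and grouping between the phases; the user supplies only the two functions.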

 
