Hadoop 2.7.3 Exception from container-launch, failed due to AM Container Exit code: 127


Posted: 2017-01-28 06:22:38

Question:

I have installed the stable release of Hadoop 2.7.3. I set all the environment variables, such as JAVA_HOME, HADOOP_HOME, and PATH, and I configured yarn-site.xml, hdfs-site.xml, core-site.xml, and mapred-site.xml.

I uploaded sample files into HDFS. When I execute the wordcount program from hadoop-mapreduce-examples-2.7.3.jar using the following command

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar grep input output 'as[a-z.]+'

it gives me the following exception:

    17/01/28 00:59:33 INFO input.FileInputFormat: Total input paths to process : 36
    17/01/28 00:59:33 INFO mapreduce.JobSubmitter: number of splits:36
    17/01/28 00:59:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1485582326336_0001
    17/01/28 00:59:34 INFO impl.YarnClientImpl: Submitted application application_1485582326336_0001
    17/01/28 00:59:34 INFO mapreduce.Job: The url to track the job: http://XXXX.local:8088/proxy/application_1485582326336_0001/
    17/01/28 00:59:34 INFO mapreduce.Job: Running job: job_1485582326336_0001
    17/01/28 00:59:38 INFO mapreduce.Job: Job job_1485582326336_0001 running in uber mode : false
    17/01/28 00:59:38 INFO mapreduce.Job:  map 0% reduce 0%
    17/01/28 00:59:38 INFO mapreduce.Job: Job job_1485582326336_0001 failed with state FAILED due to: Application application_1485582326336_0001 failed 2 times due to AM Container for appattempt_1485582326336_0001_000002 exited with  exitCode: 127
    For more detailed output, check application tracking page:http://XXXXXX.local:8088/cluster/app/application_1485582326336_0001Then, click on links to logs of each attempt.
    Diagnostics: Exception from container-launch.
    Container id: container_1485582326336_0001_02_000001
    Exit code: 127
    Stack trace: ExitCodeException exitCode=127: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:582)
        at org.apache.hadoop.util.Shell.run(Shell.java:479)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:773)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:212)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


    Container exited with a non-zero exit code 127
    Failing this attempt. Failing the application.
    17/01/28 00:59:38 INFO mapreduce.Job: Counters: 0
    17/01/28 00:59:38 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
    17/01/28 00:59:38 INFO mapreduce.JobSubmitter: Cleaning up the staging area /tmp/hadoop-yarn/staging/sumitdeshmukh/.staging/job_1485582326336_0002
    org.apache.hadoop.mapreduce.lib.input.InvalidInputException: Input path does not exist: hdfs://localhost:9000/user/<username>/grep-temp-155204726

Is there any solution to this problem?

Comments:

It looks like you are running grep, not wordcount. Also, what are input and output? Paths? Try providing absolute paths there.

The problem was resolved when I hardcoded the paths for JAVA_HOME and JAVA in hadoop-env.sh. Even though those environment variables were set, somehow Hadoop did not pick them up.

Answer 1:

The problem was with the Java home. In hadoop-env.sh, make the following change:

#export JAVA_HOME=$JAVA_HOME
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home
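As background on the error itself: exit code 127 is the standard shell status for "command not found", so the AM container's launch script most likely failed because it could not locate the `java` binary when JAVA_HOME did not resolve. A minimal, Hadoop-independent sketch of where that status comes from (the command name below is deliberately made up):

```shell
# Running a nonexistent command makes the shell exit with status 127 --
# the same code YARN reported for the failed AM container.
sh -c 'this_command_does_not_exist' 2>/dev/null
echo "exit=$?"   # prints exit=127
```

After hardcoding JAVA_HOME, restart YARN so the NodeManagers pick up the new hadoop-env.sh.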

