Error While Running a Hadoop Program: org.apache.hadoop.hdfs.BlockMissingException
15/03/18 09:59:21 INFO mapreduce.Job: Task Id : attempt_1426641074924_0002_m_000000_2, Status : FAILED
Error: org.apache.hadoop.hdfs.BlockMissingException: Could not obtain block: BP-35642051-192.168.199.91-1419581604721:blk_1073743091_2267 file=/filein/file_128M.txt
at org.apache.hadoop.hdfs.DFSInputStream.chooseDataNode(DFSInputStream.java:882)
at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:563)
at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:793)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:840)
at java.io.DataInputStream.readFully(DataInputStream.java:195)
at java.io.DataInputStream.readFully(DataInputStream.java:169)
at com.mr.AESEn.DataRecordReader.nextKeyValue(DataRecordReader.java:94)
at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(MapTask.java:533)
at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(MapContextImpl.java:80)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.nextKeyValue(WrappedMapper.java:91)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:764)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:340)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:163)
The above is the Error: org.apache.hadoop.hdfs.BlockMissingException thrown while the MapReduce job was running.
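Before touching any daemons, one quick sanity check (a sketch; the path comes from the exception above, and the exact output depends on your cluster) is to ask the NameNode whether it can still locate every block of the affected file:
[hadoop@cMaster hadoop-2.5.2]$ bin/hdfs fsck /filein/file_128M.txt -files -blocks -locations
If fsck reports missing or corrupt blocks for that file, the DataNodes that should hold the replicas are down or unreachable, which matches the exception.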
A quick search online turned up two likely causes: either one or more DataNodes have gone down, or the DataNodes cannot communicate with each other. With these two possibilities in mind, I started troubleshooting:
[hadoop@cMaster hadoop-2.5.2]$ ssh cSlave00
Last login: Tue Mar 17 08:38:10 2015 from missie-pc.lan
[hadoop@cSlave00 ~]$ jps
3952 Jps
2910 NodeManager
[hadoop@cMaster hadoop-2.5.2]$ ssh cSlave01
Last login: Tue Mar 17 08:38:13 2015 from missie-pc.lan
[hadoop@cSlave01 ~]$ jps
3051 NodeManager
2714 DataNode
4562 Jps
[hadoop@cMaster hadoop-2.5.2]$ ssh cSlave02
Last login: Tue Mar 17 08:38:15 2015 from missie-pc.lan
[hadoop@cSlave02 ~]$ jps
4154 Jps
2921 NodeManager
As you can see, the DataNode processes on cSlave00 and cSlave02 have both crashed.
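Besides running jps on every slave, the NameNode itself can report which DataNodes it currently considers live or dead (a sketch, run from the master with the same hadoop user and install directory as above):
[hadoop@cMaster hadoop-2.5.2]$ bin/hdfs dfsadmin -report
Nodes that have stopped heartbeating are listed as dead in the report, which should line up with the two slaves whose jps output above is missing a DataNode process.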
So:
1. Stop YARN and DFS
[hadoop@cMaster hadoop-2.5.2]$ sbin/stop-yarn.sh
[hadoop@cMaster hadoop-2.5.2]$ sbin/stop-dfs.sh
2. Restart DFS and YARN
[hadoop@cMaster hadoop-2.5.2]$ sbin/start-dfs.sh
[hadoop@cMaster hadoop-2.5.2]$ sbin/start-yarn.sh
15/03/18 10:04:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [cMaster]
cMaster: starting namenode, logging to /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-namenode-cMaster.out
cSlave00: starting datanode, logging to /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-datanode-cSlave00.out
cSlave02: starting datanode, logging to /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-datanode-cSlave02.out
cSlave01: starting datanode, logging to /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-datanode-cSlave01.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/hadoop-2.5.2/logs/hadoop-hadoop-secondarynamenode-cMaster.out
15/03/18 10:04:35 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.5.2/logs/yarn-hadoop-resourcemanager-cMaster.out
cSlave01: starting nodemanager, logging to /home/hadoop/hadoop-2.5.2/logs/yarn-hadoop-nodemanager-cSlave01.out
cSlave02: starting nodemanager, logging to /home/hadoop/hadoop-2.5.2/logs/yarn-hadoop-nodemanager-cSlave02.out
cSlave00: starting nodemanager, logging to /home/hadoop/hadoop-2.5.2/logs/yarn-hadoop-nodemanager-cSlave00.out
With that, the earlier Error: org.apache.hadoop.hdfs.BlockMissingException is resolved.
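For reference, the full restart works here because the DataNodes send fresh block reports to the NameNode when they come back up, so the previously missing blocks become readable again. A lighter-weight alternative (a sketch, assuming the /home/hadoop/hadoop-2.5.2 layout shown in the logs above) would be to restart only the crashed DataNode daemon on each affected slave and confirm it with jps:
[hadoop@cSlave00 ~]$ hadoop-2.5.2/sbin/hadoop-daemon.sh start datanode
[hadoop@cSlave00 ~]$ jps
[hadoop@cSlave02 ~]$ hadoop-2.5.2/sbin/hadoop-daemon.sh start datanode
[hadoop@cSlave02 ~]$ jps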