Problems starting Hadoop
Answer A: It is also possible that the DataNode fails to start and its log reports "ulimit -a for user root".
Cause: the number of files the DataNode has open at runtime has reached the system-wide limit.
Check the current limit:
[root@centos-FI hadoop-2.4.1]# ulimit -n
1024
Fix: raise the maximum number of open files:
[root@centos-FI hadoop-2.4.1]# ulimit -n 65536
[root@centos-FI hadoop-2.4.1]# ulimit -n
65536
Start Hadoop again:
[root@centos-FI hadoop-2.4.1]# jps
6330 WebAppProxyServer
6097 NameNode
6214 ResourceManager
6148 DataNode
6441 Jps
6271 NodeManager
6390 JobHistoryServer
PS: the ulimit command only changes the limit for the current session; it reverts to the default after a reboot. For a permanent change, raise the nofile limit in /etc/security/limits.conf, as sketched below.
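For example, entries like these make the limit persistent (a sketch: the user root and the value 65536 are assumptions matching the session above, and PAM must apply pam_limits at login for them to take effect):

root soft nofile 65536
root hard nofile 65536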
Hadoop problem: the DataNode fails to start on Hadoop 2.6
Problem description: after the first start, jps shows all the daemons; after the second start, the DataNode is missing.
The log is as follows:

2014-12-22 12:08:27,264 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2014-12-22 12:08:27,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = root
2014-12-22 12:08:27,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2014-12-22 12:08:32,865 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2014-12-22 12:08:32,889 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2014-12-22 12:08:32,931 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2014-12-22 12:08:32,945 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2014-12-22 12:08:32,968 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2014-12-22 12:08:32,992 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:8020 starting to offer service
2014-12-22 12:08:33,001 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2014-12-22 12:08:33,003 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2014-12-22 12:08:33,536 INFO org.apache.hadoop.hdfs.server.common.Storage: DataNode version: -56 and NameNode layout version: -60
2014-12-22 12:08:33,699 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop/tmp/dfs/data/in_use.lock acquired by nodename 17247@henry-ThinkPad-T400
2014-12-22 12:08:33,706 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:8020. Exiting.
java.io.IOException: Incompatible clusterIDs in /home/hadoop/tmp/dfs/data: namenode clusterID = CID-19f887ba-2e8d-4c7e-ae01-e38a30581693; datanode clusterID = CID-14aac0b3-3c32-45db-adb8-b5fc494eaa3d
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:646)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:320)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:403)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:422)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1311)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1276)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:314)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:828)
        at java.lang.Thread.run(Thread.java:662)
2014-12-22 12:08:33,716 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:8020
2014-12-22 12:08:33,718 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2014-12-22 12:08:35,718 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2014-12-22 12:08:35,720 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2014-12-22 12:08:35,722 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at henry-ThinkPad-T400/127.0.0.1
************************************************************/
Problem analysis: after DFS was formatted the first time, Hadoop was started and used, and the format command (hdfs namenode -format) was later run again. Reformatting regenerates the NameNode's clusterID, while the DataNode's clusterID stays unchanged.
Problem summary: the DataNode's clusterID and the NameNode's clusterID no longer match.
Fix: following the path from the log, cd /home/hadoop/tmp/dfs reveals two directories, data and name. Copy the clusterID from name/current/VERSION into data/current/VERSION, overwriting the old value so the two match (see the sketch below). Then restart Hadoop and run jps to verify the daemons are up.
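A minimal sketch of the fix, assuming the storage directory /home/hadoop/tmp/dfs from the log above (the variable NN_CID is introduced here for illustration):

cd /home/hadoop/tmp/dfs
# Read the clusterID the NameNode generated during the last format
NN_CID=$(grep clusterID name/current/VERSION | cut -d= -f2)
# Overwrite the DataNode's stale clusterID with the NameNode's value
sed -i "s/^clusterID=.*/clusterID=${NN_CID}/" data/current/VERSION
# Confirm the two VERSION files now agree
grep clusterID name/current/VERSION data/current/VERSION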