hadoop-2.7.6 Initialization Source Code Analysis

Posted by tyxuancx



The basic setup follows any of the usual tutorials, so let's go straight to the debug log produced during initialization and begin the source analysis. The main goal is to explain the long-troubling warning: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

root@localhost:/opt/server/hadoop> hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

18/09/05 23:34:09 DEBUG util.Shell: setsid exited with exit code 0
18/09/05 23:34:09 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = localhost/127.0.0.1
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.6
STARTUP_MSG: classpath = /opt/server/hadoop-2.7.6/etc/hadoop:**********(very long classpath omitted)*********

STARTUP_MSG: build = https://shv@git-wip-us.apache.org/repos/asf/hadoop.git -r 085099c66cf28be31604560c376fa282e69282b8; compiled by 'kshvachk' on 2018-04-18T01:33Z

STARTUP_MSG: java = 1.8.0_162
************************************************************/

18/09/05 23:34:09 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/09/05 23:34:09 INFO namenode.NameNode: createNameNode [-format]
18/09/05 23:34:09 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, about=, type=DEFAULT, valueName=Time, value=[Rate of successful kerberos logins and latency (milliseconds)])
18/09/05 23:34:09 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, about=, type=DEFAULT, valueName=Time, value=[Rate of failed kerberos logins and latency (milliseconds)])
18/09/05 23:34:09 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, about=, type=DEFAULT, valueName=Time, value=[GetGroups])
18/09/05 23:34:09 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
18/09/05 23:34:09 DEBUG util.KerberosName: Kerberos krb5 configuration not found, setting default realm to empty
18/09/05 23:34:09 DEBUG security.Groups: Creating new Groups object
18/09/05 23:34:09 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
Java HotSpot(TM) Server VM warning: You have loaded library /opt/server/hadoop-2.7.6/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
Its highly recommended that you fix the library with execstack -c <libfile>, or link it with -z noexecstack.
18/09/05 23:34:09 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: /opt/server/hadoop-2.7.6/lib/native/libhadoop.so.1.0.0: /opt/server/hadoop-2.7.6/lib/native/libhadoop.so.1.0.0: wrong ELF class: ELFCLASS64 (Possible cause: architecture word width mismatch)
18/09/05 23:34:09 DEBUG util.NativeCodeLoader: java.library.path=/opt/server/hadoop/lib/native
18/09/05 23:34:09 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/09/05 23:34:09 DEBUG util.PerformanceAdvisory: Falling back to shell based
18/09/05 23:34:09 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
18/09/05 23:34:09 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
Formatting using clusterid: CID-e2fe9d04-8e24-48e1-8a2d-eea5fd09adad
18/09/05 23:34:09 INFO namenode.FSNamesystem: No KeyProvider found.
18/09/05 23:34:09 INFO namenode.FSNamesystem: fsLock is fair: true
18/09/05 23:34:09 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
18/09/05 23:34:09 DEBUG util.LightWeightHashSet: initial capacity=16, max load factor= 0.75, min load factor= 0.2
18/09/05 23:34:09 DEBUG util.LightWeightHashSet: initial capacity=16, max load factor= 0.75, min load factor= 0.2
18/09/05 23:34:09 DEBUG util.LightWeightHashSet: initial capacity=16, max load factor= 0.75, min load factor= 0.2
18/09/05 23:34:09 DEBUG util.LightWeightHashSet: initial capacity=16, max load factor= 0.75, min load factor= 0.2
18/09/05 23:34:09 DEBUG util.LightWeightHashSet: initial capacity=16, max load factor= 0.75, min load factor= 0.2
18/09/05 23:34:09 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/09/05 23:34:09 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/09/05 23:34:09 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/09/05 23:34:09 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Sep 05 23:34:09
18/09/05 23:34:09 INFO util.GSet: Computing capacity for map BlocksMap
18/09/05 23:34:09 INFO util.GSet: VM type = 32-bit
18/09/05 23:34:09 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
18/09/05 23:34:09 INFO util.GSet: capacity = 2^22 = 4194304 entries
18/09/05 23:34:09 DEBUG util.GSet: recommended=4194304, actual=4194304
18/09/05 23:34:09 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/09/05 23:34:09 INFO blockmanagement.BlockManager: defaultReplication = 3
18/09/05 23:34:09 INFO blockmanagement.BlockManager: maxReplication = 512
18/09/05 23:34:09 INFO blockmanagement.BlockManager: minReplication = 1
18/09/05 23:34:09 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
18/09/05 23:34:09 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/09/05 23:34:09 INFO blockmanagement.BlockManager: encryptDataTransfer = false
18/09/05 23:34:09 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
18/09/05 23:34:09 DEBUG security.UserGroupInformation: hadoop login
18/09/05 23:34:09 DEBUG security.UserGroupInformation: hadoop login commit
18/09/05 23:34:09 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: root
18/09/05 23:34:09 DEBUG security.UserGroupInformation: Using user: "UnixPrincipal: root" with name root
18/09/05 23:34:09 DEBUG security.UserGroupInformation: User entry: "root"
18/09/05 23:34:09 DEBUG security.UserGroupInformation: Assuming keytab is managed externally since logged in from subject.
18/09/05 23:34:09 DEBUG security.UserGroupInformation: UGI loginUser:root (auth:SIMPLE)
18/09/05 23:34:09 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
18/09/05 23:34:09 INFO namenode.FSNamesystem: supergroup = supergroup
18/09/05 23:34:09 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/09/05 23:34:09 INFO namenode.FSNamesystem: HA Enabled: false
18/09/05 23:34:09 INFO namenode.FSNamesystem: Append Enabled: true
18/09/05 23:34:10 INFO util.GSet: Computing capacity for map INodeMap
18/09/05 23:34:10 INFO util.GSet: VM type = 32-bit
18/09/05 23:34:10 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
18/09/05 23:34:10 INFO util.GSet: capacity = 2^21 = 2097152 entries
18/09/05 23:34:10 DEBUG util.GSet: recommended=2097152, actual=2097152
18/09/05 23:34:10 INFO namenode.FSDirectory: ACLs enabled? false
18/09/05 23:34:10 INFO namenode.FSDirectory: XAttrs enabled? true
18/09/05 23:34:10 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/09/05 23:34:10 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/09/05 23:34:10 INFO util.GSet: Computing capacity for map cachedBlocks
18/09/05 23:34:10 INFO util.GSet: VM type = 32-bit
18/09/05 23:34:10 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
18/09/05 23:34:10 INFO util.GSet: capacity = 2^19 = 524288 entries
18/09/05 23:34:10 DEBUG util.GSet: recommended=524288, actual=524288
18/09/05 23:34:10 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/09/05 23:34:10 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/09/05 23:34:10 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
18/09/05 23:34:10 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/09/05 23:34:10 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/09/05 23:34:10 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/09/05 23:34:10 DEBUG impl.MetricsSystemImpl: NNTopUserOpCounts, Top N operations by user
18/09/05 23:34:10 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/09/05 23:34:10 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/09/05 23:34:10 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/09/05 23:34:10 INFO util.GSet: VM type = 32-bit
18/09/05 23:34:10 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
18/09/05 23:34:10 INFO util.GSet: capacity = 2^16 = 65536 entries
18/09/05 23:34:10 DEBUG util.GSet: recommended=65536, actual=65536
18/09/05 23:34:10 DEBUG metrics.RetryCacheMetrics: Initialized MetricsRegistry{info=MetricsInfoImpl{name=RetryCache.NameNodeRetryCache, description=RetryCache.NameNodeRetryCache}, tags=[], metrics=[]}
18/09/05 23:34:10 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterLong org.apache.hadoop.ipc.metrics.RetryCacheMetrics.cacheHit with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, about=, type=DEFAULT, valueName=Time, value=[Number of RetryCache hit])
18/09/05 23:34:10 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterLong org.apache.hadoop.ipc.metrics.RetryCacheMetrics.cacheCleared with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, about=, type=DEFAULT, valueName=Time, value=[Number of RetryCache cleared])
18/09/05 23:34:10 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableCounterLong org.apache.hadoop.ipc.metrics.RetryCacheMetrics.cacheUpdated with annotation @org.apache.hadoop.metrics2.annotation.Metric(sampleName=Ops, always=false, about=, type=DEFAULT, valueName=Time, value=[Number of RetryCache updated])
18/09/05 23:34:10 DEBUG impl.MetricsSystemImpl: RetryCache.NameNodeRetryCache, Aggregate RetryCache metrics
18/09/05 23:34:10 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1010289756-127.0.0.1-1536161650094
18/09/05 23:34:10 INFO common.Storage: Storage directory /opt/workspace/hadoop/dfs/name has been successfully formatted.
18/09/05 23:34:10 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/workspace/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/09/05 23:34:10 INFO namenode.FSImageFormatProtobuf: Image file /opt/workspace/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
18/09/05 23:34:10 DEBUG util.MD5FileUtils: Saved MD5 48fc96dd63e4cbc85241191d259c67cb to /opt/workspace/hadoop/dfs/name/current/fsimage_0000000000000000000.md5
18/09/05 23:34:10 DEBUG namenode.FSImage: renaming /opt/workspace/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 to /opt/workspace/hadoop/dfs/name/current/fsimage_0000000000000000000
18/09/05 23:34:10 DEBUG namenode.FSImageTransactionalStorageInspector: Checking file /opt/workspace/hadoop/dfs/name/current/VERSION
18/09/05 23:34:10 DEBUG namenode.FSImageTransactionalStorageInspector: Checking file /opt/workspace/hadoop/dfs/name/current/seen_txid
18/09/05 23:34:10 DEBUG namenode.FSImageTransactionalStorageInspector: Checking file /opt/workspace/hadoop/dfs/name/current/fsimage_0000000000000000000.md5
18/09/05 23:34:10 DEBUG namenode.FSImageTransactionalStorageInspector: Checking file /opt/workspace/hadoop/dfs/name/current/fsimage_0000000000000000000
18/09/05 23:34:10 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/09/05 23:34:10 DEBUG namenode.FileJournalManager: FileJournalManager(root=/opt/workspace/hadoop/dfs/name): selecting input streams starting at 0 (excluding inProgress) from among 0 candidate file(s)
18/09/05 23:34:10 INFO util.ExitUtil: Exiting with status 0
18/09/05 23:34:10 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost/127.0.0.1
************************************************************/
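Before walking through the scripts, note that the DEBUG lines above already reveal the root cause of the WARN: the JVM is 32-bit (see the `VM type = 32-bit` GSet lines), while the bundled `libhadoop.so.1.0.0` is a 64-bit ELF object (`wrong ELF class: ELFCLASS64`), so the native library cannot be loaded and Hadoop falls back to the builtin-java classes. A minimal sketch of that word-width check (the `check_native_lib` helper is hypothetical, written only to illustrate the mismatch; on a real host you would compare `file .../libhadoop.so.1.0.0` against the JVM's bitness):

```shell
#!/bin/sh
# Hypothetical helper: decide whether a native library's ELF class matches
# the JVM word width, mirroring the error in the log above
# ("wrong ELF class: ELFCLASS64" raised by a 32-bit JVM).
check_native_lib() {
    lib_class="$1"   # e.g. "ELFCLASS32" or "ELFCLASS64", as the JVM error reports it
    jvm_bits="$2"    # e.g. "32" or "64", cf. the "VM type = 32-bit" log lines
    case "$lib_class" in
        *"$jvm_bits"*) echo "OK: native library matches the ${jvm_bits}-bit JVM" ;;
        *)             echo "MISMATCH: $lib_class library on a ${jvm_bits}-bit JVM" ;;
    esac
}

# The combination from the log: a 64-bit libhadoop.so on a 32-bit JVM
check_native_lib "ELFCLASS64" "32"
# A matching pair would load fine:
check_native_lib "ELFCLASS64" "64"
```

The practical fix is to use a libhadoop.so built for the JVM's architecture (or a 64-bit JVM for the shipped 64-bit library); the WARN itself is harmless, since the builtin-java fallback is functionally equivalent.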

 

Source Code Analysis

1. Command Parsing

# internals of the hadoop command (bin/hadoop)
bin=`which $0`
bin=`dirname ${bin}`
bin=`cd "$bin"; pwd`

DEFAULT_LIBEXEC_DIR="$bin"/../libexec

HADOOP_LIBEXEC_DIR=${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}
. $HADOOP_LIBEXEC_DIR/hadoop-config.sh

  Initialization is done with the hadoop command, and every invocation of hadoop first sources hadoop-config.sh to set up a number of Java and Hadoop environment variables (see "Hadoop 2.2.0 startup scripts — libexec/hadoop-config.sh" for details). The hdfs command likewise sources hdfs-config.sh first to set up its environment.
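A note on the shell idiom used when locating libexec: `${HADOOP_LIBEXEC_DIR:-$DEFAULT_LIBEXEC_DIR}` expands to the caller's `HADOOP_LIBEXEC_DIR` if it is set and non-empty, and falls back to the computed default otherwise, which is how a user can point the scripts at a custom libexec directory. A standalone sketch of the pattern (variable names and paths hypothetical):

```shell
#!/bin/sh
# The ":-" parameter expansion: take the caller's value if the variable is
# set and non-empty, otherwise fall back to the default.
DEFAULT_DIR="/opt/default/libexec"

unset USER_DIR
echo "${USER_DIR:-$DEFAULT_DIR}"    # prints /opt/default/libexec

USER_DIR="/opt/custom/libexec"
echo "${USER_DIR:-$DEFAULT_DIR}"    # prints /opt/custom/libexec
```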

       The line "DEPRECATED: Use of this script to execute hdfs command is deprecated." means this usage is obsolete: the corresponding hdfs command should be used instead of hadoop, a replacement reportedly introduced in hadoop-0.21.0. Similarly, hadoop pipes|job|queue|mrgroups|mradmin|jobtracker|tasktracker has been superseded by the mapred command.

# hdfs commands (fragment of the case statement in bin/hadoop)
namenode|secondarynamenode|datanode|dfs|dfsadmin|fsck|balancer|fetchdt|oiv|dfsgroups|portmap|nfs3)
  echo "DEPRECATED: Use of this script to execute hdfs command is deprecated." 1>&2
  echo "Instead use the hdfs command for it." 1>&2
  echo "" 1>&2
  # try to locate hdfs and if present, delegate to it.
  shift
  if [ -f "${HADOOP_HDFS_HOME}"/bin/hdfs ]; then
    exec "${HADOOP_HDFS_HOME}"/bin/hdfs ${COMMAND/dfsgroups/groups} "$@"
  elif [ -f "${HADOOP_PREFIX}"/bin/hdfs ]; then
    exec "${HADOOP_PREFIX}"/bin/hdfs ${COMMAND/dfsgroups/groups} "$@"
  else
    echo "HADOOP_HDFS_HOME not found!"
    exit 1
  fi
  ;;
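One detail worth noting in the delegation above: `${COMMAND/dfsgroups/groups}` is bash pattern substitution. It rewrites the legacy `dfsgroups` subcommand into the `groups` subcommand that bin/hdfs understands, and leaves every other subcommand name unchanged before handing it to `exec`. A standalone sketch (bash required; the `COMMAND` values are just illustrative):

```shell
#!/bin/bash
# bash pattern substitution as used when delegating to bin/hdfs:
# replace the first occurrence of "dfsgroups" with "groups".
COMMAND="dfsgroups"
echo "${COMMAND/dfsgroups/groups}"   # prints: groups

COMMAND="namenode"
echo "${COMMAND/dfsgroups/groups}"   # no match, prints: namenode
```

So `hadoop namenode -format` ends up re-executed as `hdfs namenode -format`, which is exactly the replacement the DEPRECATED message recommends.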

 
