Installing and starting Hadoop 2.7.2 on Windows
Posted by 不想下火车的人
To install Hadoop on Windows there is no need to fiddle with Cygwin: unpack the official release archive locally -> give four basic files a minimal configuration -> run one start command -> done. The steps are spelled out below, using Hadoop 2.7.2 as the example.
1. Downloading the release needs little explanation: go to http://hadoop.apache.org/ -> click Releases on the left -> click a mirror site -> open http://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/common -> choose 2.7.2 and download hadoop-2.7.2.tar.gz
2. Extraction is just as simple: copy the archive to the root of drive D and unpack it there, which produces D:\hadoop-2.7.2
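One prerequisite the post leaves implicit: Hadoop reads its JDK location from etc\hadoop\hadoop-env.cmd, and 2.7.x on Windows is also commonly reported to need the native helpers winutils.exe and hadoop.dll in the bin directory. A sketch of the environment setup follows; the JDK path here is an assumed example, substitute your own install:

```bat
rem Sketch only: the JDK path is an assumed example, not from the post.
rem In D:\hadoop-2.7.2\etc\hadoop\hadoop-env.cmd, JAVA_HOME must point at a
rem JDK whose path contains no spaces (use the 8.3 short name, e.g.
rem PROGRA~1, if the JDK sits under "Program Files").
set JAVA_HOME=C:\Java\jdk1.8.0_101

rem Convenient for running hadoop/hdfs from any console window:
set HADOOP_HOME=D:\hadoop-2.7.2
set PATH=%PATH%;%HADOOP_HOME%\bin;%HADOOP_HOME%\sbin
```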
3. In D:\hadoop-2.7.2\etc\hadoop, find the following four files and paste in the minimal configuration shown for each
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/hadoop/data/dfs/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/hadoop/data/dfs/datanode</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
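All four snippets above are ordinary Hadoop configuration documents: a flat list of property elements, each holding a name and a value. As an illustration only (not part of the original post), here is a small Python sketch that builds such a file with the standard library and parses it back, which is a quick way to catch a typo before handing the file to Hadoop:

```python
# Illustrative sketch (not from the original post): render a Hadoop-style
# <configuration> document and read a value back out of it.
import xml.etree.ElementTree as ET

def make_config(props: dict) -> str:
    """Render {name: value} pairs as a Hadoop <configuration> document."""
    root = ET.Element("configuration")
    for name, value in props.items():
        prop = ET.SubElement(root, "property")
        ET.SubElement(prop, "name").text = name
        ET.SubElement(prop, "value").text = value
    return ET.tostring(root, encoding="unicode")

# The minimal core-site.xml from step 3:
core_site = make_config({"fs.defaultFS": "hdfs://localhost:9000"})

# Round-tripping through the parser catches malformed XML before Hadoop does.
parsed = ET.fromstring(core_site)
print(parsed.find("property/value").text)   # -> hdfs://localhost:9000
```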
4. Open a Windows command prompt, change into the hadoop-2.7.2\bin directory, and run two commands: first format the NameNode, then start Hadoop.
D:\hadoop-2.7.2\bin>hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
17/05/13 07:16:40 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = wulinfeng/192.168.8.5
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.2
STARTUP_MSG: classpath = D:\hadoop-2.7.2\etc\hadoop;D:\hadoop-2.7.2\share\hadoop\common\lib\activation-1.1.jar;...;D:\hadoop-2.7.2\share\hadoop\mapreduce\hadoop-mapreduce-examples-2.7.2.jar (the full list of bundled jars is elided here)
STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r b165c4fe8a74265c792ce23f546c64604acf0e41; compiled by 'jenkins' on 2016-01-26T00:08Z
STARTUP_MSG: java = 1.8.0_101
************************************************************/
17/05/13 07:16:40 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf
17/05/13 07:16:42 INFO namenode.FSNamesystem: No KeyProvider found.
17/05/13 07:16:42 INFO namenode.FSNamesystem: fsLock is fair:true
17/05/13 07:16:42 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.lim
it=1000
17/05/13 07:16:42 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.re
gistration.ip-hostname-check=true
17/05/13 07:16:42 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.
block.deletion.sec is set to 000:00:00:00.000
17/05/13 07:16:42 INFO blockmanagement.BlockManager: The block deletion will sta
rt around 2017 May 13 07:16:42
17/05/13 07:16:42 INFO util.GSet: Computing capacity for map BlocksMap
17/05/13 07:16:42 INFO util.GSet: VM type = 64-bit
17/05/13 07:16:42 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/05/13 07:16:42 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/05/13 07:16:42 INFO blockmanagement.BlockManager: dfs.block.access.token.enab
le=false
17/05/13 07:16:42 INFO blockmanagement.BlockManager: defaultReplication
= 1
17/05/13 07:16:42 INFO blockmanagement.BlockManager: maxReplication
= 512
17/05/13 07:16:42 INFO blockmanagement.BlockManager: minReplication
= 1
17/05/13 07:16:42 INFO blockmanagement.BlockManager: maxReplicationStreams
= 2
17/05/13 07:16:42 INFO blockmanagement.BlockManager: replicationRecheckInterval
= 3000
17/05/13 07:16:42 INFO blockmanagement.BlockManager: encryptDataTransfer
= false
17/05/13 07:16:42 INFO blockmanagement.BlockManager: maxNumBlocksToLog
= 1000
17/05/13 07:16:42 INFO namenode.FSNamesystem: fsOwner = Administrato
r (auth:SIMPLE)
17/05/13 07:16:42 INFO namenode.FSNamesystem: supergroup = supergroup
17/05/13 07:16:42 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/05/13 07:16:42 INFO namenode.FSNamesystem: HA Enabled: false
17/05/13 07:16:42 INFO namenode.FSNamesystem: Append Enabled: true
17/05/13 07:16:43 INFO util.GSet: Computing capacity for map INodeMap
17/05/13 07:16:43 INFO util.GSet: VM type = 64-bit
17/05/13 07:16:43 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/05/13 07:16:43 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/05/13 07:16:43 INFO namenode.FSDirectory: ACLs enabled? false
17/05/13 07:16:43 INFO namenode.FSDirectory: XAttrs enabled? true
17/05/13 07:16:43 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/05/13 07:16:43 INFO namenode.NameNode: Caching file names occuring more than
10 times
17/05/13 07:16:43 INFO util.GSet: Computing capacity for map cachedBlocks
17/05/13 07:16:43 INFO util.GSet: VM type = 64-bit
17/05/13 07:16:43 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/05/13 07:16:43 INFO util.GSet: capacity = 2^18 = 262144 entries
17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pc
t = 0.9990000128746033
17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanode
s = 0
17/05/13 07:16:43 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension
= 30000
17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.n
um.buckets = 10
17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.user
s = 10
17/05/13 07:16:43 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.
minutes = 1,5,25
17/05/13 07:16:43 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/05/13 07:16:43 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total
heap and retry cache entry expiry time is 600000 millis
17/05/13 07:16:43 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/05/13 07:16:43 INFO util.GSet: VM type = 64-bit
17/05/13 07:16:43 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.
1 KB
17/05/13 07:16:43 INFO util.GSet: capacity = 2^15 = 32768 entries
17/05/13 07:16:43 INFO namenode.FSImage: Allocated new BlockPoolId: BP-664414510
-192.168.8.5-1494631003212
17/05/13 07:16:43 INFO common.Storage: Storage directory \hadoop\data\dfs\nameno
de has been successfully formatted.
17/05/13 07:16:43 INFO namenode.NNStorageRetentionManager: Going to retain 1 ima
ges with txid >= 0
17/05/13 07:16:43 INFO util.ExitUtil: Exiting with status 0
17/05/13 07:16:43 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at wulinfeng/192.168.8.5
************************************************************/
D:\hadoop-2.7.2\bin>cd ..\sbin
D:\hadoop-2.7.2\sbin>start-all.cmd
This script is Deprecated. Instead use start-dfs.cmd and start-yarn.cmd
starting yarn daemons
D:\hadoop-2.7.2\sbin>jps
4944 DataNode
5860 NodeManager
3532 Jps
7852 NameNode
7932 ResourceManager
D:\hadoop-2.7.2\sbin>
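As the deprecation notice above says, start-all.cmd just delegates to the per-subsystem scripts. The explicit equivalents, together with the matching stop scripts that also ship in sbin of the 2.7.2 release, look like this (the prompt lines mirror the session above):

```bat
rem HDFS daemons (NameNode + DataNode), then YARN daemons
rem (ResourceManager + NodeManager):
D:\hadoop-2.7.2\sbin>start-dfs.cmd
D:\hadoop-2.7.2\sbin>start-yarn.cmd

rem And to shut everything down again:
D:\hadoop-2.7.2\sbin>stop-yarn.cmd
D:\hadoop-2.7.2\sbin>stop-dfs.cmd
```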
The jps command confirms that all four processes are up. In a browser, go to localhost:8088 to watch jobs, and to localhost:50070 -> Utilities -> Browse the file system to inspect HDFS. When Windows launches these four processes it pops up a console window for each; to see what each of them is doing, their logs are reproduced below as well:
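For scripting the same health check, jps output is easy to parse: each line is a pid followed by a class name. A small illustrative Python helper (not from the original post) that flags any of the four expected daemons that failed to come up:

```python
# Illustrative sketch: verify the four Hadoop daemons appear in `jps` output.
# In practice you would feed it subprocess.run(["jps"], ...).stdout instead of
# the hard-coded sample below, which is taken from the console session above.
REQUIRED = {"NameNode", "DataNode", "ResourceManager", "NodeManager"}

def missing_daemons(jps_output: str) -> set:
    """Return the names of required daemons absent from `jps` output."""
    running = set()
    for line in jps_output.splitlines():
        parts = line.split()
        if len(parts) == 2:          # each line is "<pid> <ClassName>"
            running.add(parts[1])
    return REQUIRED - running

sample = """4944 DataNode
5860 NodeManager
3532 Jps
7852 NameNode
7932 ResourceManager"""

print(missing_daemons(sample))   # -> set(), i.e. everything is up
```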
DataNode
************************************************************/
17/05/13 07:18:24 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
17/05/13 07:18:25 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
17/05/13 07:18:25 INFO impl.MetricsSystemImpl: DataNode metrics system started
17/05/13 07:18:25 INFO datanode.BlockScanner: Initialized block scanner with targetBytesPerSec 1048576
17/05/13 07:18:25 INFO datanode.DataNode: Configured hostname is wulinfeng
17/05/13 07:18:25 INFO datanode.DataNode: Starting DataNode with maxLockedMemory = 0
17/05/13 07:18:25 INFO datanode.DataNode: Opened streaming server at /0.0.0.0:50010
17/05/13 07:18:25 INFO datanode.DataNode: Balancing bandwith is 1048576 bytes/s
17/05/13 07:18:25 INFO datanode.DataNode: Number threads for balancing is 5
17/05/13 07:18:25 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
17/05/13 07:18:26 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
17/05/13 07:18:26 INFO http.HttpRequestLog: Http request log for http.requests.datanode is not defined
17/05/13 07:18:26 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
17/05/13 07:18:26 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
17/05/13 07:18:26 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
17/05/13 07:18:26 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
17/05/13 07:18:26 INFO http.HttpServer2: Jetty bound to port 53058
17/05/13 07:18:26 INFO mortbay.log: jetty-6.1.26
17/05/13 07:18:29 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:53058
17/05/13 07:18:41 INFO web.DatanodeHttpServer: Listening HTTP traffic on /0.0.0.0:50075
17/05/13 07:18:42 INFO datanode.DataNode: dnUserName = Administrator
17/05/13 07:18:42 INFO datanode.DataNode: supergroup = supergroup
17/05/13 07:18:42 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:42 INFO ipc.Server: Starting Socket Reader #1 for port 50020
17/05/13 07:18:42 INFO datanode.DataNode: Opened IPC server at /0.0.0.0:50020
17/05/13 07:18:42 INFO datanode.DataNode: Refresh request received for nameservices: null
17/05/13 07:18:42 INFO datanode.DataNode: Starting BPOfferServices for nameservices: <default>
17/05/13 07:18:42 INFO ipc.Server: IPC Server listener on 50020: starting
17/05/13 07:18:42 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:42 INFO datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000 starting to offer service
17/05/13 07:18:43 INFO common.Storage: Lock on \hadoop\data\dfs\datanode\in_use.lock acquired by nodename 4944@wulinfeng
17/05/13 07:18:43 INFO common.Storage: Storage directory \hadoop\data\dfs\datanode is not formatted for BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO common.Storage: Formatting ...
17/05/13 07:18:43 INFO common.Storage: Analyzing storage directories for bpid BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO common.Storage: Locking is disabled for \hadoop\data\dfs\datanode\current\BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO common.Storage: Block pool storage directory \hadoop\data\dfs\datanode\current\BP-664414510-192.168.8.5-1494631003212 is not formatted for BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO common.Storage: Formatting ...
17/05/13 07:18:43 INFO common.Storage: Formatting block pool BP-664414510-192.168.8.5-1494631003212 directory \hadoop\data\dfs\datanode\current\BP-664414510-192.168.8.5-1494631003212\current
17/05/13 07:18:43 INFO datanode.DataNode: Setting up storage: nsid=61861794;bpid=BP-664414510-192.168.8.5-1494631003212;lv=-56;nsInfo=lv=-63;cid=CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf;nsid=61861794;c=0;bpid=BP-664414510-192.168.8.5-1494631003212;dnuuid=null
17/05/13 07:18:43 INFO datanode.DataNode: Generated and persisted new Datanode UUID e6e53ca9-b788-4c1c-9308-29b31be28705
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Added new volume: DS-f2b82635-0df9-484f-9d12-4364a9279b20
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Added volume - \hadoop\data\dfs\datanode\current, StorageType: DISK
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Registered FSDatasetState MBean
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Adding block pool BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Scanning block pool BP-664414510-192.168.8.5-1494631003212 on volume D:\hadoop\data\dfs\datanode\current...
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Time taken to scan block pool BP-664414510-192.168.8.5-1494631003212 on D:\hadoop\data\dfs\datanode\current: 15ms
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-664414510-192.168.8.5-1494631003212: 20ms
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Adding replicas to map for block pool BP-664414510-192.168.8.5-1494631003212 on volume D:\hadoop\data\dfs\datanode\current...
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Time to add replicas to map for block pool BP-664414510-192.168.8.5-1494631003212 on volume D:\hadoop\data\dfs\datanode\current: 0ms
17/05/13 07:18:43 INFO impl.FsDatasetImpl: Total time to add all replicas to map: 17ms
17/05/13 07:18:44 INFO datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1494650306107 with interval 21600000
17/05/13 07:18:44 INFO datanode.VolumeScanner: Now scanning bpid BP-664414510-192.168.8.5-1494631003212 on volume \hadoop\data\dfs\datanode
17/05/13 07:18:44 INFO datanode.VolumeScanner: VolumeScanner(\hadoop\data\dfs\datanode, DS-f2b82635-0df9-484f-9d12-4364a9279b20): finished scanning block pool BP-664414510-192.168.8.5-1494631003212
17/05/13 07:18:44 INFO datanode.DataNode: Block pool BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid null) service to localhost/127.0.0.1:9000 beginning handshake with NN
17/05/13 07:18:44 INFO datanode.VolumeScanner: VolumeScanner(\hadoop\data\dfs\datanode, DS-f2b82635-0df9-484f-9d12-4364a9279b20): no suitable block pools found to scan. Waiting 1814399766 ms.
17/05/13 07:18:44 INFO datanode.DataNode: Block pool Block pool BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid null) service to localhost/127.0.0.1:9000 successfully registered with NN
17/05/13 07:18:44 INFO datanode.DataNode: For namenode localhost/127.0.0.1:9000 using DELETEREPORT_INTERVAL of 300000 msec BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
17/05/13 07:18:44 INFO datanode.DataNode: Namenode Block pool BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid e6e53ca9-b788-4c1c-9308-29b31be28705) service to localhost/127.0.0.1:9000 trying to claim ACTIVE state with txid=1
17/05/13 07:18:44 INFO datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-664414510-192.168.8.5-1494631003212 (Datanode Uuid e6e53ca9-b788-4c1c-9308-29b31be28705) service to localhost/127.0.0.1:9000
17/05/13 07:18:44 INFO datanode.DataNode: Successfully sent block report 0x20e81034dafa, containing 1 storage report(s), of which we sent 1. The reports had 0 total blocks and used 1 RPC(s). This took 5 msec to generate and 91 msecs for RPC and NN processing. Got back one command: FinalizeCommand/5.
17/05/13 07:18:44 INFO datanode.DataNode: Got finalize command for block pool BP-664414510-192.168.8.5-1494631003212
NameNode
************************************************************/
17/05/13 07:18:24 INFO namenode.NameNode: createNameNode []
17/05/13 07:18:26 INFO impl.MetricsConfig: loaded properties from hadoop-metrics
2.properties
17/05/13 07:18:26 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 s
econd(s).
17/05/13 07:18:26 INFO impl.MetricsSystemImpl: NameNode metrics system started
17/05/13 07:18:26 INFO namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
17/05/13 07:18:26 INFO namenode.NameNode: Clients are to use localhost:9000 to a
ccess this namenode/service.
17/05/13 07:18:28 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0
.0.0:50070
17/05/13 07:18:28 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter
(org.mortbay.log) via org.mortbay.log.Slf4jLog
17/05/13 07:18:28 INFO server.AuthenticationFilter: Unable to initialize FileSig
nerSecretProvider, falling back to use random secrets.
17/05/13 07:18:28 INFO http.HttpRequestLog: Http request log for http.requests.n
amenode is not defined
17/05/13 07:18:28 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
17/05/13 07:18:28 INFO http.HttpServer2: Added filter static_user_filter (class=
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
17/05/13 07:18:28 INFO http.HttpServer2: Added filter static_user_filter (class=
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
17/05/13 07:18:28 INFO http.HttpServer2: Added filter static_user_filter (class=
org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context stat
ic
17/05/13 07:18:28 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
17/05/13 07:18:28 INFO http.HttpServer2: addJerseyResourcePackage: packageName=o
rg.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.r
esources, pathSpec=/webhdfs/v1/*
17/05/13 07:18:28 INFO http.HttpServer2: Jetty bound to port 50070
17/05/13 07:18:28 INFO mortbay.log: jetty-6.1.26
17/05/13 07:18:31 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
17/05/13 07:18:31 WARN namenode.FSNamesystem: Only one image storage directory (
dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant
storage directories!
17/05/13 07:18:31 WARN namenode.FSNamesystem: Only one namespace edits storage d
irectory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of
redundant storage directories!
17/05/13 07:18:31 INFO namenode.FSNamesystem: No KeyProvider found.
17/05/13 07:18:31 INFO namenode.FSNamesystem: fsLock is fair:true
17/05/13 07:18:31 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.lim
it=1000
17/05/13 07:18:31 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.re
gistration.ip-hostname-check=true
17/05/13 07:18:31 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.
block.deletion.sec is set to 000:00:00:00.000
17/05/13 07:18:31 INFO blockmanagement.BlockManager: The block deletion will sta
rt around 2017 May 13 07:18:31
17/05/13 07:18:31 INFO util.GSet: Computing capacity for map BlocksMap
17/05/13 07:18:31 INFO util.GSet: VM type = 64-bit
17/05/13 07:18:31 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/05/13 07:18:31 INFO util.GSet: capacity = 2^21 = 2097152 entries
17/05/13 07:18:31 INFO blockmanagement.BlockManager: dfs.block.access.token.enab
le=false
17/05/13 07:18:31 INFO blockmanagement.BlockManager: defaultReplication
= 1
17/05/13 07:18:31 INFO blockmanagement.BlockManager: maxReplication
= 512
17/05/13 07:18:31 INFO blockmanagement.BlockManager: minReplication
= 1
17/05/13 07:18:31 INFO blockmanagement.BlockManager: maxReplicationStreams
= 2
17/05/13 07:18:31 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/05/13 07:18:31 INFO blockmanagement.BlockManager: encryptDataTransfer = false
17/05/13 07:18:31 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
17/05/13 07:18:31 INFO namenode.FSNamesystem: fsOwner = Administrator (auth:SIMPLE)
17/05/13 07:18:31 INFO namenode.FSNamesystem: supergroup = supergroup
17/05/13 07:18:31 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/05/13 07:18:31 INFO namenode.FSNamesystem: HA Enabled: false
17/05/13 07:18:31 INFO namenode.FSNamesystem: Append Enabled: true
17/05/13 07:18:32 INFO util.GSet: Computing capacity for map INodeMap
17/05/13 07:18:32 INFO util.GSet: VM type = 64-bit
17/05/13 07:18:32 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/05/13 07:18:32 INFO util.GSet: capacity = 2^20 = 1048576 entries
17/05/13 07:18:32 INFO namenode.FSDirectory: ACLs enabled? false
17/05/13 07:18:32 INFO namenode.FSDirectory: XAttrs enabled? true
17/05/13 07:18:32 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/05/13 07:18:32 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/05/13 07:18:32 INFO util.GSet: Computing capacity for map cachedBlocks
17/05/13 07:18:32 INFO util.GSet: VM type = 64-bit
17/05/13 07:18:32 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/05/13 07:18:32 INFO util.GSet: capacity = 2^18 = 262144 entries
17/05/13 07:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/05/13 07:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/05/13 07:18:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
17/05/13 07:18:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/05/13 07:18:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/05/13 07:18:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/05/13 07:18:32 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/05/13 07:18:32 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/05/13 07:18:33 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/05/13 07:18:33 INFO util.GSet: VM type = 64-bit
17/05/13 07:18:33 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/05/13 07:18:33 INFO util.GSet: capacity = 2^15 = 32768 entries
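顺便解释一下上面几条util.GSet日志的来历：capacity是不超过“堆上限的给定百分比 / 每条目引用大小”的最大2的幂（64位JVM按每条目8字节引用估算）。下面是一个小sketch，只复现日志里的算术，并非Hadoop的LightWeightGSet源码（8字节引用大小是基于64位VM的假设）：

```python
import math

def gset_capacity(max_memory_bytes, percent, ref_bytes=8):
    # Entry budget: the given percentage of max heap divided by the
    # assumed per-entry reference size (8 bytes on a 64-bit VM),
    # rounded down to the nearest power of two.
    budget = max_memory_bytes * percent / 100.0 / ref_bytes
    return 1 << int(math.log2(budget))

max_mem = 889 * 1024 * 1024  # "max memory 889 MB" from the log

print(gset_capacity(max_mem, 1.0))   # INodeMap           -> 1048576 (2^20)
print(gset_capacity(max_mem, 0.25))  # cachedBlocks       -> 262144  (2^18)
print(gset_capacity(max_mem, 0.03))  # NameNodeRetryCache -> 32768   (2^15)
```

三个结果正好对应日志里INodeMap、cachedBlocks、NameNodeRetryCache的2^20、2^18、2^15。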
17/05/13 07:18:33 INFO common.Storage: Lock on \hadoop\data\dfs\namenode\in_use.lock acquired by nodename [email protected]
17/05/13 07:18:34 INFO namenode.FileJournalManager: Recovering unfinalized segments in \hadoop\data\dfs\namenode\current
17/05/13 07:18:34 INFO namenode.FSImage: No edit log streams selected.
17/05/13 07:18:34 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.
17/05/13 07:18:34 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
17/05/13 07:18:34 INFO namenode.FSImage: Loaded image for txid 0 from \hadoop\data\dfs\namenode\current\fsimage_0000000000000000000
17/05/13 07:18:34 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
17/05/13 07:18:34 INFO namenode.FSEditLog: Starting log segment at 1
17/05/13 07:18:34 INFO namenode.NameCache: initialized with 0 entries 0 lookups
17/05/13 07:18:35 INFO namenode.FSNamesystem: Finished loading FSImage in 1331 msecs
17/05/13 07:18:36 INFO namenode.NameNode: RPC server is binding to localhost:9000
17/05/13 07:18:36 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:36 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
17/05/13 07:18:36 INFO ipc.Server: Starting Socket Reader #1 for port 9000
17/05/13 07:18:36 INFO namenode.LeaseManager: Number of blocks under construction: 0
17/05/13 07:18:36 INFO namenode.LeaseManager: Number of blocks under construction: 0
17/05/13 07:18:36 INFO namenode.FSNamesystem: initializing replication queues
17/05/13 07:18:36 INFO hdfs.StateChange: STATE* Leaving safe mode after 5 secs
17/05/13 07:18:36 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
17/05/13 07:18:36 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
17/05/13 07:18:36 INFO blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
17/05/13 07:18:37 INFO blockmanagement.BlockManager: Total number of blocks = 0
17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of invalid blocks = 0
17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0
17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of over-replicated blocks = 0
17/05/13 07:18:37 INFO blockmanagement.BlockManager: Number of blocks being written = 0
17/05/13 07:18:37 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 98 msec
17/05/13 07:18:37 INFO namenode.NameNode: NameNode RPC up at: localhost/127.0.0.1:9000
17/05/13 07:18:37 INFO namenode.FSNamesystem: Starting services required for active state
17/05/13 07:18:37 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:37 INFO ipc.Server: IPC Server listener on 9000: starting
17/05/13 07:18:37 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
17/05/13 07:18:44 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(127.0.0.1:50010, datanodeUuid=e6e53ca9-b788-4c1c-9308-29b31be28705, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf;nsid=61861794;c=0) storage e6e53ca9-b788-4c1c-9308-29b31be28705
17/05/13 07:18:44 INFO blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
17/05/13 07:18:44 INFO net.NetworkTopology: Adding a new node: /default-rack/127.0.0.1:50010
17/05/13 07:18:44 INFO blockmanagement.DatanodeDescriptor: Number of failed storage changes from 0 to 0
17/05/13 07:18:44 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-f2b82635-0df9-484f-9d12-4364a9279b20 for DN 127.0.0.1:50010
17/05/13 07:18:44 INFO BlockStateChange: BLOCK* processReport: from storage DS-f2b82635-0df9-484f-9d12-4364a9279b20 node DatanodeRegistration(127.0.0.1:50010, datanodeUuid=e6e53ca9-b788-4c1c-9308-29b31be28705, infoPort=50075, infoSecurePort=0, ipcPort=50020, storageInfo=lv=-56;cid=CID-1284c5d0-592a-4a41-b185-e53fb57dcfbf;nsid=61861794;c=0), blocks: 0, hasStaleStorage: false, processing time: 2 msecs
NodeManager
************************************************************/
17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.container.ContainerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ContainerEventDispatcher
17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.application.ApplicationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl$ApplicationEventDispatcher
17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizationEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService
17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServicesEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.AuxServices
17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl
17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncherEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainersLauncher
17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.ContainerManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl
17/05/13 07:18:33 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.NodeManagerEventType for class org.apache.hadoop.yarn.server.nodemanager.NodeManager
17/05/13 07:18:34 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
17/05/13 07:18:34 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
17/05/13 07:18:34 INFO impl.MetricsSystemImpl: NodeManager metrics system started
17/05/13 07:18:34 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.event.LogHandlerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.loghandler.NonAggregatingLogHandler
17/05/13 07:18:34 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.sharedcache.SharedCacheUploadService
17/05/13 07:18:34 INFO localizer.ResourceLocalizationService: per directory file limit = 8192
17/05/13 07:18:43 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.event.LocalizerEventType for class org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.ResourceLocalizationService$LocalizerTracker
17/05/13 07:18:44 WARN containermanager.AuxServices: The Auxilurary Service named 'mapreduce_shuffle' in the configuration is for class org.apache.hadoop.mapred.ShuffleHandler which has a name of 'httpshuffle'. Because these are not the same tools trying to send ServiceData and read Service Meta Data may have issues unless the refer to the name in the config.
17/05/13 07:18:44 INFO containermanager.AuxServices: Adding auxiliary service httpshuffle, "mapreduce_shuffle"
17/05/13 07:18:44 INFO monitor.ContainersMonitorImpl: Using ResourceCalculatorPlugin : [email protected]03eb
17/05/13 07:18:44 INFO monitor.ContainersMonitorImpl: Using ResourceCalculatorProcessTree : null
17/05/13 07:18:44 INFO monitor.ContainersMonitorImpl: Physical memory check enabled: true
17/05/13 07:18:44 INFO monitor.ContainersMonitorImpl: Virtual memory check enabled: true
17/05/13 07:18:44 WARN monitor.ContainersMonitorImpl: NodeManager configured with 8 G physical memory allocated to containers, which is more than 80% of the total physical memory available (5.6 G). Thrashing might happen.
17/05/13 07:18:44 INFO nodemanager.NodeStatusUpdaterImpl: Initialized nodemanager for null: physical-memory=8192 virtual-memory=17204 virtual-cores=8
17/05/13 07:18:44 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:44 INFO ipc.Server: Starting Socket Reader #1 for port 53137
17/05/13 07:18:44 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ContainerManagementProtocolPB to the server
17/05/13 07:18:44 INFO containermanager.ContainerManagerImpl: Blocking new container-requests as container manager rpc server is still starting.
17/05/13 07:18:44 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:44 INFO ipc.Server: IPC Server listener on 53137: starting
17/05/13 07:18:44 INFO security.NMContainerTokenSecretManager: Updating node address : wulinfeng:53137
17/05/13 07:18:44 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:44 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.nodemanager.api.LocalizationProtocolPB to the server
17/05/13 07:18:44 INFO ipc.Server: IPC Server listener on 8040: starting
17/05/13 07:18:44 INFO ipc.Server: Starting Socket Reader #1 for port 8040
17/05/13 07:18:44 INFO localizer.ResourceLocalizationService: Localizer started on port 8040
17/05/13 07:18:44 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:44 INFO mapred.IndexCache: IndexCache created with max memory = 10485760
17/05/13 07:18:44 INFO mapred.ShuffleHandler: httpshuffle listening on port 13562
17/05/13 07:18:44 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree currently is supported only on Linux.
17/05/13 07:18:45 INFO containermanager.ContainerManagerImpl: ContainerManager started at wulinfeng/192.168.8.5:53137
17/05/13 07:18:45 INFO containermanager.ContainerManagerImpl: ContainerManager bound to 0.0.0.0/0.0.0.0:0
17/05/13 07:18:45 INFO webapp.WebServer: Instantiating NMWebApp at 0.0.0.0:8042
17/05/13 07:18:45 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
17/05/13 07:18:45 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
17/05/13 07:18:45 INFO http.HttpRequestLog: Http request log for http.requests.nodemanager is not defined
17/05/13 07:18:45 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
17/05/13 07:18:45 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context node
17/05/13 07:18:45 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
17/05/13 07:18:45 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
17/05/13 07:18:45 INFO http.HttpServer2: adding path spec: /node/*
17/05/13 07:18:45 INFO http.HttpServer2: adding path spec: /ws/*
17/05/13 07:18:46 INFO webapp.WebApps: Registered webapp guice modules
17/05/13 07:18:46 INFO http.HttpServer2: Jetty bound to port 8042
17/05/13 07:18:46 INFO mortbay.log: jetty-6.1.26
17/05/13 07:18:46 INFO mortbay.log: Extract jar:file:/D:/hadoop-2.7.2/share/hadoop/yarn/hadoop-yarn-common-2.7.2.jar!/webapps/node to C:\Users\ADMINI~1\AppData\Local\Temp\Jetty_0_0_0_0_8042_node____19tj0x\webapp
May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices as a root resource class
May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver as a provider class
May 13, 2017 7:18:47 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
May 13, 2017 7:18:47 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
May 13, 2017 7:18:48 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.nodemanager.webapp.NMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
17/05/13 07:18:48 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWi[email protected]:8042
17/05/13 07:18:48 INFO webapp.WebApps: Web app node started at 8042
17/05/13 07:18:49 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8031
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Sending out 0 NM container statuses: []
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Registering with RM using containers :[]
17/05/13 07:18:49 INFO security.NMContainerTokenSecretManager: Rolling master-key for container-tokens, got key with id -610858047
17/05/13 07:18:49 INFO security.NMTokenSecretManagerInNM: Rolling master-key for container-tokens, got key with id 2017302061
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Registered with ResourceManager as wulinfeng:53137 with total resource of <memory:8192, vCores:8>
17/05/13 07:18:49 INFO nodemanager.NodeStatusUpdaterImpl: Notifying ContainerManager to unblock new container-requests
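注意上面那条WARN：NodeManager默认向容器通告8 G内存，而这台机器物理内存只有5.6 G，可能引起抖动（thrashing）。如果想消除这个警告，可以在yarn-site.xml里把通告内存压下来。下面是一个最小sketch，4096这个数值是针对本机的假设性取值（按你机器的实际内存自行调整），并非日志规定的值：

```xml
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
```

改完重启NodeManager后，注册到RM的total resource会相应变成 <memory:4096, ...>。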
ResourceManager
************************************************************/
17/05/13 07:18:19 INFO conf.Configuration: found resource core-site.xml at file:/D:/hadoop-2.7.2/etc/hadoop/core-site.xml
17/05/13 07:18:20 INFO security.Groups: clearing userToGroupsMap cache
17/05/13 07:18:21 INFO conf.Configuration: found resource yarn-site.xml at file:/D:/hadoop-2.7.2/etc/hadoop/yarn-site.xml
17/05/13 07:18:21 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher
17/05/13 07:18:29 INFO security.NMTokenSecretManagerInRM: NMTokenKeyRollingInterval: 86400000ms and NMTokenKeyActivationDelay: 900000ms
17/05/13 07:18:29 INFO security.RMContainerTokenSecretManager: ContainerTokenKeyRollingInterval: 86400000ms and ContainerTokenKeyActivationDelay: 900000ms
17/05/13 07:18:29 INFO security.AMRMTokenSecretManager: AMRMTokenKeyRollingInterval: 86400000ms and AMRMTokenKeyActivationDelay: 900000 ms
17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStoreEventType for class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler
17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.NodesListManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.NodesListManager
17/05/13 07:18:29 INFO resourcemanager.ResourceManager: Using Scheduler: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher
17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher
17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher
17/05/13 07:18:29 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$NodeEventDispatcher
17/05/13 07:18:29 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
17/05/13 07:18:30 INFO impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
17/05/13 07:18:30 INFO impl.MetricsSystemImpl: ResourceManager metrics system started
17/05/13 07:18:30 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMAppManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.RMAppManager
17/05/13 07:18:30 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncherEventType for class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher
17/05/13 07:18:30 INFO resourcemanager.RMNMInfo: Registered RMNMInfo MBean
17/05/13 07:18:30 INFO security.YarnAuthorizationProvider: org.apache.hadoop.yarn.security.ConfiguredYarnAuthorizer is instiantiated.
17/05/13 07:18:30 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list
17/05/13 07:18:30 INFO conf.Configuration: found resource capacity-scheduler.xml at file:/D:/hadoop-2.7.2/etc/hadoop/capacity-scheduler.xml
17/05/13 07:18:30 INFO capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root is undefined
17/05/13 07:18:30 INFO capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root is undefined
17/05/13 07:18:30 INFO capacity.ParentQueue: root, capacity=1.0, asboluteCapacity=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=SUBMIT_APP:*ADMINISTER_QUEUE:*, labels=*, , reservationsContinueLooking=true
17/05/13 07:18:30 INFO capacity.ParentQueue: Initialized parent-queue root name=root, fullname=root
17/05/13 07:18:30 INFO capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root.default is undefined
17/05/13 07:18:30 INFO capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root.default is undefined
17/05/13 07:18:30 INFO capacity.LeafQueue: Initializing default
capacity = 1.0 [= (float) configuredCapacity / 100 ]
asboluteCapacity = 1.0 [= parentAbsoluteCapacity * capacity ]
maxCapacity = 1.0 [= configuredMaxCapacity ]
absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ]
userLimit = 100 [= configuredUserLimit ]
userLimitFactor = 1.0 [= configuredUserLimitFactor ]
maxApplications = 10000 [= configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)]
maxApplicationsPerUser = 10000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ]
usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)]
absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory]
maxAMResourcePerQueuePercent = 0.1 [= configuredMaximumAMResourcePercent ]
minimumAllocationFactor = 0.875 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ]
maximumAllocation = <memory:8192, vCores:32> [= configuredMaxAllocation ]
numContainers = 0 [= currentNumContainers ]
state = RUNNING [= configuredState ]
acls = SUBMIT_APP:*ADMINISTER_QUEUE:* [= configuredAcls ]
nodeLocalityDelay = 40
labels=*,
nodeLocalityDelay = 40
reservationsContinueLooking = true
preemptionDisabled = true
17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0
17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized queue: root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized root queue root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0
17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized queue mappings, override: false
17/05/13 07:18:30 INFO capacity.CapacityScheduler: Initialized CapacityScheduler with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, minimumAllocation=<<memory:1024, vCores:1>>, maximumAllocation=<<memory:8192, vCores:32>>, asynchronousScheduling=false, asyncScheduleInterval=5ms
17/05/13 07:18:30 INFO metrics.SystemMetricsPublisher: YARN system metrics publishing service is not enabled
17/05/13 07:18:30 INFO resourcemanager.ResourceManager: Transitioning to active state
17/05/13 07:18:31 INFO recovery.RMStateStore: Updating AMRMToken
17/05/13 07:18:31 INFO security.RMContainerTokenSecretManager: Rolling master-key for container-tokens
17/05/13 07:18:31 INFO security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens
17/05/13 07:18:31 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
17/05/13 07:18:31 INFO security.RMDelegationTokenSecretManager: storing master key with keyID 1
17/05/13 07:18:31 INFO recovery.RMStateStore: Storing RMDTMasterKey.
17/05/13 07:18:31 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.nodelabels.event.NodeLabelsStoreEventType for class org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager$ForwardingEventHandler
17/05/13 07:18:31 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
17/05/13 07:18:31 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
17/05/13 07:18:31 INFO security.RMDelegationTokenSecretManager: storing master key with keyID 2
17/05/13 07:18:31 INFO recovery.RMStateStore: Storing RMDTMasterKey.
17/05/13 07:18:31 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:31 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceTrackerPB to the server
17/05/13 07:18:31 INFO ipc.Server: Starting Socket Reader #1 for port 8031
17/05/13 07:18:32 INFO ipc.Server: IPC Server listener on 8031: starting
17/05/13 07:18:32 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:32 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:33 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server
17/05/13 07:18:33 INFO ipc.Server: IPC Server listener on 8030: starting
17/05/13 07:18:33 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:33 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationClientProtocolPB to the server
17/05/13 07:18:33 INFO resourcemanager.ResourceManager: Transitioned to active state
17/05/13 07:18:33 INFO ipc.Server: IPC Server listener on 8032: starting
17/05/13 07:18:33 INFO ipc.Server: Starting Socket Reader #1 for port 8030
17/05/13 07:18:33 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:34 INFO ipc.Server: Starting Socket Reader #1 for port 8032
17/05/13 07:18:34 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:34 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
17/05/13 07:18:34 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
17/05/13 07:18:34 INFO http.HttpRequestLog: Http request log for http.requests.resourcemanager is not defined
17/05/13 07:18:34 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
17/05/13 07:18:34 INFO http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context cluster
17/05/13 07:18:34 INFO http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context static
17/05/13 07:18:34 INFO http.HttpServer2: Added filter RMAuthenticationFilter (class=org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter) to context logs
17/05/13 07:18:34 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context cluster
17/05/13 07:18:34 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
17/05/13 07:18:34 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
17/05/13 07:18:34 INFO http.HttpServer2: adding path spec: /cluster/*
17/05/13 07:18:34 INFO http.HttpServer2: adding path spec: /ws/*
17/05/13 07:18:35 INFO webapp.WebApps: Registered webapp guice modules
17/05/13 07:18:35 INFO http.HttpServer2: Jetty bound to port 8088
17/05/13 07:18:35 INFO mortbay.log: jetty-6.1.26
17/05/13 07:18:35 INFO mortbay.log: Extract jar:file:/D:/hadoop-2.7.2/share/hadoop/yarn/hadoop-yarn-common-2.7.2.jar!/webapps/cluster to C:\Users\ADMINI~1\AppData\Local\Temp\Jetty_0_0_0_0_8088_cluster____u0rgz3\webapp
17/05/13 07:18:36 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
17/05/13 07:18:36 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
17/05/13 07:18:36 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
May 13, 2017 7:18:36 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class
May 13, 2017 7:18:36 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class
May 13, 2017 7:18:36 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
May 13, 2017 7:18:36 AM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
May 13, 2017 7:18:37 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
May 13, 2017 7:18:38 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
May 13, 2017 7:18:40 AM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope "Singleton"
17/05/13 07:18:41 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWi[email protected]:8088
17/05/13 07:18:41 INFO webapp.WebApps: Web app cluster started at 8088
17/05/13 07:18:41 INFO ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
17/05/13 07:18:41 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB to the server
17/05/13 07:18:41 INFO ipc.Server: IPC Server listener on 8033: starting
17/05/13 07:18:41 INFO ipc.Server: IPC Server Responder: starting
17/05/13 07:18:41 INFO ipc.Server: Starting Socket Reader #1 for port 8033
17/05/13 07:18:49 INFO util.RackResolver: Resolved wulinfeng to /default-rack
17/05/13 07:18:49 INFO resourcemanager.ResourceTrackerService: NodeManager from node wulinfeng(cmPort: 53137 httpPort: 8042) registered with capability: <memory:8192, vCores:8>, assigned nodeId wulinfeng:53137
17/05/13 07:18:49 INFO rmnode.RMNodeImpl: wulinfeng:53137 Node Transitioned from NEW to RUNNING
17/05/13 07:18:49 INFO capacity.CapacityScheduler: Added node wulinfeng:53137 clusterResource: <memory:8192, vCores:8>
17/05/13 07:28:30 INFO scheduler.AbstractYarnScheduler: Release request cache is cleaned up
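三个进程的日志都显示启动正常之后，可以探测一下日志里出现的各个端口来确认服务确实在监听。下面是一个Python小sketch（需在本机运行；9000、50075、8042、8088来自上面的日志，50070是2.7.2默认的NameNode web UI端口，日志里没有打印，属于假设的默认值）：

```python
import socket

def port_open(host, port, timeout=2.0):
    """True if something accepts TCP connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# 9000  NameNode RPC        50070 NameNode web UI (default, not in the log)
# 50075 DataNode web UI     8042  NodeManager web UI
# 8088  ResourceManager web UI
for port in (9000, 50070, 50075, 8042, 8088):
    print(port, "up" if port_open("localhost", port) else "down")
```

也可以直接在浏览器访问 http://localhost:8088 （YARN集群页面）和 http://localhost:50070 （HDFS页面）看效果。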