Hadoop Single-Node Installation

Posted eosclover


Java environment variables =====================================
export JAVA_HOME=/home/test/setupPackage/jdk1.7.0_67
export JRE_HOME=/home/test/setupPackage/jdk1.7.0_67/jre
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar:$JRE_HOME/lib
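After adding these exports (typically to the hadoop user's ~/.bashrc, then `source ~/.bashrc`), it helps to confirm that JAVA_HOME actually points at a JDK before going further. A minimal sketch of such a check; the `check_java_home` helper and the throwaway stand-in directory are hypothetical, so substitute the real JDK path from above:

```shell
# Hypothetical helper: report whether a directory looks like a usable JDK
# by testing for an executable bin/java underneath it.
check_java_home() {
  if [ -x "$1/bin/java" ]; then echo "ok"; else echo "missing"; fi
}

# Demonstrate with a throwaway stand-in directory (not a real JDK).
tmp=$(mktemp -d)
mkdir -p "$tmp/bin"
printf '#!/bin/sh\n' > "$tmp/bin/java" && chmod +x "$tmp/bin/java"

check_java_home "$tmp"         # ok
check_java_home /nonexistent   # missing
```

Running the same check against the real path catches a mistyped JAVA_HOME before Hadoop's own scripts fail with a less obvious error.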
Hadoop and ZooKeeper environment variables ===============================================
export HADOOP_HOME=/home/test/setupPackage/hadoop-2.7.3
export ZOOKEEPER_HOME=/home/test/setupPackage/zookeeper-3.4.6
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin

zoo.cfg (ZooKeeper data directories)==========================================
dataDir=/home/test/setupPackage/zookeeper-3.4.6/zooData/zoodata
dataLogDir=/home/test/setupPackage/zookeeper-3.4.6/zooData/zoolog
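These two lines belong in ZooKeeper's conf/zoo.cfg. For reference, a minimal standalone zoo.cfg might look like the following; tickTime=2000 and clientPort=2181 are the ZooKeeper defaults and are assumptions added here, while the dataDir paths are the ones used in this guide:

```properties
# Minimal standalone zoo.cfg sketch (tickTime/clientPort are the defaults)
tickTime=2000
clientPort=2181
dataDir=/home/test/setupPackage/zookeeper-3.4.6/zooData/zoodata
dataLogDir=/home/test/setupPackage/zookeeper-3.4.6/zooData/zoolog
```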

hadoop-env.sh================================================
# The stock "export JAVA_HOME=${JAVA_HOME}" line may not resolve when daemons
# are launched over SSH, so replace it with an absolute path:
export JAVA_HOME=/home/test/setupPackage/jdk1.7.0_67
yarn-env.sh==============================================
# Add the following line before JAVA=$JAVA_HOME/bin/java
export JAVA_HOME=/home/test/setupPackage/jdk1.7.0_67
core-site.xml=======================================================
<configuration>
  <!-- RPC address of the HDFS NameNode -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://singleNode:9000</value>
  </property>
  <!-- Base directory for files Hadoop generates at runtime -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/test/setupPackage/hadoop-2.7.3/tmp</value>
  </property>
</configuration>

hdfs-site.xml==========================================================
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/home/test/setupPackage/hadoop-2.7.3/hdfs/name</value>
    <description>Where the NameNode stores the HDFS namespace metadata</description>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/test/setupPackage/hadoop-2.7.3/hdfs/data</value>
    <description>Physical storage location of data blocks on the DataNode</description>
  </property>
  <!-- Number of HDFS replicas -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>singleNode:50070</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
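Note that `dfs.name.dir` and `dfs.data.dir` are the legacy pre-2.x property names; Hadoop 2.7 still honors them but logs a deprecation warning at startup. The current equivalents, with the same paths as above, would be:

```xml
<!-- Modern replacements for the deprecated keys used above (Hadoop 2.x naming) -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>/home/test/setupPackage/hadoop-2.7.3/hdfs/name</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/home/test/setupPackage/hadoop-2.7.3/hdfs/data</value>
</property>
```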
mapred-site.xml========================================================
<configuration>
<!-- Tell the MapReduce framework to run on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
yarn-site.xml============================================================
<configuration>
<!-- Reducers fetch map output via the mapreduce_shuffle auxiliary service -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>singleNode:8099</value>
</property>
</configuration>
Passwordless SSH login ===============================================================
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa    # generate id_dsa.pub and id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys    # append id_dsa.pub to authorized_keys
chmod 0600 ~/.ssh/authorized_keys    # strip read/write/execute from group and others
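The `chmod 0600` step matters because sshd refuses to use an authorized_keys file that is group- or world-readable. A sketch of the same permission fix against a throwaway file, where the temp directory is a stand-in for the real ~/.ssh:

```shell
# Reproduce the permission fix on a throwaway file instead of the real ~/.ssh.
tmp=$(mktemp -d)
touch "$tmp/authorized_keys"
chmod 0600 "$tmp/authorized_keys"
stat -c '%a' "$tmp/authorized_keys"   # prints 600 (owner read/write only)
```

After appending the key for real, `ssh singleNode` (or `ssh localhost`) should log in without prompting for a password.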

Starting and stopping HDFS ==============================================================
Note: the NameNode must be formatted before the very first start
cd /home/test/setupPackage/hadoop-2.7.3
./bin/hdfs namenode -format
Start the NameNode and DataNode daemons ===============================================
Start:
./sbin/start-dfs.sh
Stop:
./sbin/stop-dfs.sh
Start the ResourceManager and NodeManager daemons ============================================
Start:
./sbin/start-yarn.sh
Stop:
./sbin/stop-yarn.sh

Verify with jps that the following processes are running ==================================================
- NameNode
- DataNode
- Jps
- Master
- NodeManager
- ResourceManager
- SecondaryNameNode
My problem: the DataNode process was missing.
Log:
2019-08-18 03:29:18,810 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid dfs.datanode.data.dir /home/test/setupPackage/hadoop-2.7.3/hdfs/data :
EPERM: Operation not permitted
Fix the directory ownership (run as the user that starts Hadoop, or as root):
chown -R <user-that-starts-hadoop> $HADOOP_HOME/hdfs
chgrp -R <group-of-that-user> $HADOOP_HOME/hdfs
Restart Hadoop:
./sbin/start-dfs.sh
./sbin/start-yarn.sh
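The ownership fix can be sketched on a scratch directory tree; the layout below is a stand-in for the real $HADOOP_HOME/hdfs, and handing a directory to one's own user and group needs no root:

```shell
# Recreate the fix on a scratch directory: build an hdfs-style tree, then hand
# it recursively to the user and group that will start Hadoop.
tmp=$(mktemp -d)
mkdir -p "$tmp/hdfs/name" "$tmp/hdfs/data"
chown -R "$(id -un)" "$tmp/hdfs"
chgrp -R "$(id -gn)" "$tmp/hdfs"
stat -c '%U' "$tmp/hdfs/data"   # prints the current user, who can now write here
```

The `-R` flag is what matters: the DataNode checks the data directory itself, so ownership has to be correct all the way down, not just on the top-level hdfs directory.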
查看进程:
[test@singleNode hadoop-2.7.3]$ jps
13746 ResourceManager
13297 NameNode
13938 Jps
13587 SecondaryNameNode
13424 DataNode
13842 NodeManager

Web UI verification =====================================
http://IP:50070    (HDFS NameNode web UI, per dfs.namenode.http-address above)
http://IP:8099    (YARN ResourceManager web UI; this guide set yarn.resourcemanager.webapp.address to 8099 rather than the default 8088)

 
