Redeploying HDFS as the hadoop User
Preface:
In the previous article, https://www.jianshu.com/p/eeae2f37a48c, we deployed HDFS as the root user. In a production environment, each component is usually started by its own dedicated user. This article shows how to redeploy pseudo-distributed HDFS as the hadoop user.
1. Preliminary preparation
Create the hadoop user and configure passwordless SSH login.
Reference: https://www.jianshu.com/p/589bb43e0282 (a minimal sketch is shown below)
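If the referenced article is unavailable, the setup looks roughly like this sketch (standard Linux/OpenSSH commands; the hostname hadoop000 is taken from the prompts in the sections below):

# Create the hadoop user (run as root)
useradd hadoop
passwd hadoop

# As the hadoop user, generate an SSH key pair and authorize it for this host
su - hadoop
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Verify: should print the date without asking for a password
ssh hadoop000 date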
2. Stop the HDFS processes started by root and delete the storage files under /tmp
[root@hadoop000 hadoop-2.8.1]# pwd
/opt/software/hadoop-2.8.1
[root@hadoop000 hadoop-2.8.1]# jps
32244 NameNode
32350 DataNode
32558 SecondaryNameNode
1791 Jps
[root@hadoop000 hadoop-2.8.1]# sbin/stop-dfs.sh
Stopping namenodes on [hadoop000]
hadoop000: stopping namenode
localhost: stopping datanode
Stopping secondary namenodes [0.0.0.0]
0.0.0.0: stopping secondarynamenode
[root@hadoop000 hadoop-2.8.1]# jps
2288 Jps
[root@hadoop000 hadoop-2.8.1]# rm -rf /tmp/hadoop-* /tmp/hsperfdata_*
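The rm above clears the HDFS metadata and block data created by root: by default hadoop.tmp.dir resolves to a per-user directory under /tmp, and /tmp/hsperfdata_* holds the JVM performance data that jps reads. If in doubt, you can print the effective setting first (hdfs getconf ships with Hadoop):

[root@hadoop000 hadoop-2.8.1]# bin/hdfs getconf -confKey hadoop.tmp.dir
# prints the effective storage root, e.g. /tmp/hadoop-root when run as root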
3. Change the file owner
[root@hadoop000 software]# pwd
/opt/software
[root@hadoop000 software]# chown -R hadoop:hadoop hadoop-2.8.1
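To confirm the ownership change took effect recursively, the checks below can help (find prints nothing when every file is owned by hadoop):

[root@hadoop000 software]# ls -ld hadoop-2.8.1    # owner and group should now read hadoop hadoop
[root@hadoop000 software]# find hadoop-2.8.1 ! -user hadoop    # empty output means chown -R covered everything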
4. Switch to the hadoop user and modify the relevant configuration files
# Step 1:
[hadoop@hadoop000 hadoop]$ pwd
/opt/software/hadoop-2.8.1/etc/hadoop
[hadoop@hadoop000 hadoop]$ vi hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>192.168.6.217:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>192.168.6.217:50091</value>
    </property>
</configuration>

# Step 2:
[hadoop@hadoop000 hadoop]$ vi core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.6.217:9000</value>
    </property>
</configuration>

# Step 3:
[hadoop@hadoop000 hadoop]$ vi slaves
192.168.6.217
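Before formatting, it is worth asking Hadoop for the values it actually resolved from etc/hadoop; the getconf calls below are a quick sanity check of the edits above:

[hadoop@hadoop000 hadoop-2.8.1]$ bin/hdfs getconf -confKey fs.defaultFS
hdfs://192.168.6.217:9000
[hadoop@hadoop000 hadoop-2.8.1]$ bin/hdfs getconf -confKey dfs.replication
1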
5. Format and start HDFS
[hadoop@hadoop000 hadoop-2.8.1]$ pwd
/opt/software/hadoop-2.8.1
[hadoop@hadoop000 hadoop-2.8.1]$ bin/hdfs namenode -format
[hadoop@hadoop000 hadoop-2.8.1]$ sbin/start-dfs.sh
Starting namenodes on [hadoop000]
hadoop000: starting namenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-namenode-hadoop000.out
192.168.6.217: starting datanode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-datanode-hadoop000.out
Starting secondary namenodes [hadoop000]
hadoop000: starting secondarynamenode, logging to /opt/software/hadoop-2.8.1/logs/hadoop-hadoop-secondarynamenode-hadoop000.out
[hadoop@hadoop000 hadoop-2.8.1]$ jps
3141 Jps
2806 DataNode
2665 NameNode
2990 SecondaryNameNode
# All three HDFS daemons now start via hadoop000 (no more localhost/0.0.0.0) and run as the hadoop user.
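As a final smoke test, write something into HDFS as the hadoop user; the target path below is just an example:

[hadoop@hadoop000 hadoop-2.8.1]$ bin/hdfs dfs -mkdir -p /user/hadoop
[hadoop@hadoop000 hadoop-2.8.1]$ bin/hdfs dfs -put etc/hadoop/core-site.xml /user/hadoop/
[hadoop@hadoop000 hadoop-2.8.1]$ bin/hdfs dfs -ls /user/hadoop
# The NameNode web UI should also be reachable at http://192.168.6.217:50070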