Installing Hadoop 3.1.3 (Single Node) on RedHat 8.0
Posted by robinson1988
The latest Hadoop release at the time of writing was 3.2.1, but its SecondaryNameNode would not start after installation, so 3.1.3 is installed here instead.
Hadoop download page: http://hadoop.apache.org/releases.html
Edit /etc/hosts and add the IP address and hostname:
vi /etc/hosts
192.168.56.10 server
Note: the line format is "ip hostname". In this setup the IP is 192.168.56.10 and the hostname is server:
[root@server ~]# hostname
server
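To confirm the mapping is in place, a quick grep of the hosts file is enough. A minimal sketch, run here against a throwaway file standing in for /etc/hosts:

```shell
# Check that the ip -> hostname line exists. The throwaway file stands
# in for /etc/hosts; on the real host grep /etc/hosts directly.
hosts_file=$(mktemp)
echo '192.168.56.10 server' > "$hosts_file"
if grep -qE '^192\.168\.56\.10[[:space:]]+server([[:space:]]|$)' "$hosts_file"; then
  echo "mapping ok"
fi
```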
Download Hadoop: hadoop-3.1.3.tar.gz
Download the JDK: jdk-8u241-linux-x64.tar.gz
Disable the firewall
systemctl status firewalld ---check the firewall status
systemctl stop firewalld ---stop the firewall
systemctl disable firewalld ---disable it permanently
Upload the downloaded JDK and Hadoop tarballs to /tmp
[root@server tmp]# pwd
/tmp
[root@server tmp]# ll
total 540784
-rw-r--r--. 1 root root 359196911 Mar 30 15:43 hadoop-3.1.3.tar.gz
-rw-r--r--. 1 root root 194545143 Mar 30 18:07 jdk-8u241-linux-x64.tar.gz
drwx------. 2 gdm gdm 4096 Mar 30 21:13 orbit-gdm
drwx------. 2 gdm gdm 4096 Mar 30 21:13 pulse-AMhZgn6W6wIL
-rw-------. 1 root root 0 Mar 30 18:47 yum.log
Unpack the JDK
[root@server tmp]# tar -zxvf jdk-8u241-linux-x64.tar.gz
[root@server tmp]# ll
total 540784
-rw-r--r--. 1 root root 359196911 Mar 30 15:43 hadoop-3.1.3.tar.gz
drwxr-xr-x. 7 10143 10143 4096 Dec 11 18:39 jdk1.8.0_241
-rw-r--r--. 1 root root 194545143 Mar 30 18:07 jdk-8u241-linux-x64.tar.gz
drwx------. 2 gdm gdm 4096 Mar 30 21:13 orbit-gdm
drwx------. 2 gdm gdm 4096 Mar 30 21:13 pulse-AMhZgn6W6wIL
-rw-------. 1 root root 0 Mar 30 18:47 yum.log
Move the unpacked JDK directory to /usr/local
[root@server tmp]# cd /usr/local
[root@server local]# mv /tmp/jdk1.8.0_241 jdk1.8.0_241
Configure the Java environment variables
vim /etc/profile
export JAVA_HOME=/usr/local/jdk1.8.0_241
export PATH=$PATH:$JAVA_HOME/bin
Make the environment variables take effect
source /etc/profile
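If the setup may be re-run, the two export lines are better appended only when missing, so /etc/profile never accumulates duplicates. A sketch of that, demonstrated on a throwaway file rather than the real /etc/profile:

```shell
# Append a line to a profile file only if it is not already present
# (grep -qxF: quiet, whole-line, fixed-string match).
profile=$(mktemp)   # stands in for /etc/profile
add_line() {
  grep -qxF "$1" "$2" || echo "$1" >> "$2"
}
add_line 'export JAVA_HOME=/usr/local/jdk1.8.0_241' "$profile"
add_line 'export PATH=$PATH:$JAVA_HOME/bin' "$profile"
# running it again is a no-op, so re-executing the setup is safe
add_line 'export JAVA_HOME=/usr/local/jdk1.8.0_241' "$profile"
cat "$profile"
```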
Check whether the JDK installed successfully
[root@server ~]# java -version
java version "1.8.0_241"
Java(TM) SE Runtime Environment (build 1.8.0_241-b07)
Java HotSpot(TM) 64-Bit Server VM (build 25.241-b07, mixed mode)
Configure passwordless SSH
ssh-keygen -t rsa ---press Enter at every prompt
ssh-copy-id -i ~/.ssh/id_rsa.pub server ---enter the root password when prompted
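For scripted installs there is a non-interactive variant: -N "" supplies an empty passphrase and -f an explicit output path, so no Enter-prompts are needed. Sketched here against a throwaway directory; on the real host the path would be ~/.ssh/id_rsa, followed by ssh-copy-id as above.

```shell
# Generate an RSA key pair without any prompts. The temp directory
# stands in for ~/.ssh on the real host.
keydir=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$keydir/id_rsa"
ls -l "$keydir"
```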
Unpack the Hadoop tarball and put the directory under /usr/local/
cd /tmp
tar -xzvf hadoop-3.1.3.tar.gz
mv hadoop-3.1.3 /usr/local/
Configure the Hadoop environment variables
vim /etc/profile
export HADOOP_HOME=/usr/local/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Make the environment variables take effect
source /etc/profile
Configure hadoop-env.sh
cd /usr/local/hadoop-3.1.3/etc/hadoop
vi hadoop-env.sh
export JAVA_HOME=/usr/local/jdk1.8.0_241
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HADOOP_PID_DIR=/data/hadoop/pids
export HADOOP_LOG_DIR=/data/hadoop/logs
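hadoop-env.sh above points the pid and log files at /data/hadoop, but nothing in this walkthrough creates that directory, and the daemons fail to start if it is missing. A small sketch that pre-creates both subdirectories, parameterized so it can be tried against a scratch path before running it as root:

```shell
# Create the pid/log directories hadoop-env.sh expects. The base path
# defaults to /data/hadoop from the config above; pass another path to
# exercise the function without root.
make_hadoop_dirs() {
  base="${1:-/data/hadoop}"
  mkdir -p "$base/pids" "$base/logs"
}
```

On the real host, run make_hadoop_dirs (as root) once before start-all.sh.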
Configure core-site.xml
cd /usr/local/hadoop-3.1.3/etc/hadoop
vi core-site.xml
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/hadoop-3.1.3/tmp</value>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://server:9000</value>
</property>
</configuration>
Note: core-site.xml already ships with an empty <configuration></configuration> pair. Delete that existing empty pair (or paste only the <property> elements inside it) so the file ends up with exactly one pair; a duplicated pair makes the later HDFS format step fail with a parse error.
In
<property>
<name>fs.defaultFS</name>
<value>hdfs://server:9000</value>
</property>
server is the hostname; if your hostname is not server, put yours there instead.
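A well-formed file must contain exactly one <configuration>...</configuration> pair, and that can be checked mechanically by counting the open and close tags. A sketch, run here against a freshly written sample file standing in for the real core-site.xml:

```shell
# Count <configuration> open/close tags; each must appear exactly once
# or "hdfs namenode -format" fails to parse the file. The sample file
# stands in for /usr/local/hadoop-3.1.3/etc/hadoop/core-site.xml.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://server:9000</value>
</property>
</configuration>
EOF
opens=$(grep -c '<configuration>' "$conf")
closes=$(grep -c '</configuration>' "$conf")
echo "open=$opens close=$closes"
```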
Configure hdfs-site.xml
cd /usr/local/hadoop-3.1.3/etc/hadoop
vi hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.name.dir</name>
<value>/usr/local/hadoop-3.1.3/hdfs/name</value>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/local/hadoop-3.1.3/hdfs/data</value>
</property>
</configuration>
dfs.replication: number of block replicas; 1 copy on a single node
dfs.name.dir: where the NameNode stores its metadata (in Hadoop 3 this is a deprecated alias of dfs.namenode.name.dir; both still work)
dfs.data.dir: where the DataNode stores its blocks (a deprecated alias of dfs.datanode.data.dir)
Note: as with core-site.xml, delete the file's original empty <configuration></configuration> pair so only one pair remains, otherwise formatting HDFS fails.
Configure mapred-site.xml
vi mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.application.classpath</name>
<value>
/usr/local/hadoop-3.1.3/etc/hadoop,
/usr/local/hadoop-3.1.3/share/hadoop/common/*,
/usr/local/hadoop-3.1.3/share/hadoop/common/lib/*,
/usr/local/hadoop-3.1.3/share/hadoop/hdfs/*,
/usr/local/hadoop-3.1.3/share/hadoop/hdfs/lib/*,
/usr/local/hadoop-3.1.3/share/hadoop/mapreduce/*,
/usr/local/hadoop-3.1.3/share/hadoop/mapreduce/lib/*,
/usr/local/hadoop-3.1.3/share/hadoop/yarn/*,
/usr/local/hadoop-3.1.3/share/hadoop/yarn/lib/*
</value>
</property>
</configuration>
Note: as before, delete the file's original empty <configuration></configuration> pair so only one pair remains, otherwise formatting HDFS fails.
Configure yarn-site.xml
vi yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>server</value>
</property>
<property>
<name>yarn.nodemanager.env-whitelist</name>
<value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_HOME</value>
</property>
<property>
<name>yarn.nodemanager.vmem-pmem-ratio</name>
<value>3</value>
</property>
</configuration>
Note: as before, delete the file's original empty <configuration></configuration> pair so only one pair remains, otherwise formatting HDFS fails.
In
<property>
<name>yarn.resourcemanager.hostname</name>
<value>server</value>
</property>
server is the hostname; if your hostname is not server, put yours there instead.
Configure the workers file
vi workers
server
Format the HDFS filesystem
cd /usr/local/hadoop-3.1.3
bin/hdfs namenode -format ---(bin/hadoop namenode -format still works but is deprecated in Hadoop 3)
Start and stop Hadoop
cd /usr/local/hadoop-3.1.3
sbin/start-all.sh
sbin/stop-all.sh
Check whether Hadoop installed successfully
jps
hdfs dfsadmin -report ---(hadoop dfsadmin is deprecated in Hadoop 3; hdfs dfsadmin replaces it)
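On a single node, a healthy start-all.sh leaves five daemons in the jps listing: NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager. A sketch of a check for all five, fed sample text here (with made-up pids); on the real host pass "$(jps)" instead:

```shell
# Verify that all five single-node daemons appear in a jps-style
# listing. Simple substring match, so it is a sketch, not a strict
# parser (e.g. SecondaryNameNode alone would also satisfy "NameNode").
check_daemons() {
  listing="$1"
  for d in NameNode DataNode SecondaryNameNode ResourceManager NodeManager; do
    case "$listing" in
      *"$d"*) ;;
      *) echo "missing: $d"; return 1 ;;
    esac
  done
  echo "all daemons running"
}
# Sample jps output with hypothetical pids; real usage: check_daemons "$(jps)"
sample='2301 NameNode
2456 DataNode
2633 SecondaryNameNode
2890 ResourceManager
3012 NodeManager
3333 Jps'
check_daemons "$sample"
```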
Open the following addresses in a browser to check Hadoop's status:
192.168.56.10:9870 ---HDFS NameNode web UI
192.168.56.10:8088 ---YARN ResourceManager web UI