Setting Up a Hadoop Pseudo-Distributed Cluster
Posted by chien-wong
Versions and Environment
- Hypervisor: VMware Workstation Pro 15
- Linux image: ubuntu-18.04.2-live-server-amd64.iso
- Java: jdk-8u231-linux-x64.tar.gz
- Hadoop: 3.1.3
Preparation
- (Note: complete all of the configuration below before cloning the slave nodes)
- Install Ubuntu (remember to install OpenSSH)
- Extract the Hadoop and JDK archives:
tar -zxvf xxx.tar.gz
- Move the Hadoop root directory:
mv hadoop-3.1.3 /usr/local/hadoop3
- Move the JDK root directory:
mv jdk1.8.0_231 /usr/local/jdk1.8
Add Environment Variables
- Open .bashrc and append the variables below:
# cd ~
# vim .bashrc
# Java variables
export JAVA_HOME=/usr/local/jdk1.8/
export JRE_HOME=$JAVA_HOME/jre
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
# Hadoop variables
export HADOOP_HOME=/usr/local/hadoop3
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
- Apply the environment variables:
# source .bashrc
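It is worth confirming that the new variables resolve before going further. A quick sketch (paths per the layout above; on a live machine, `java -version` and `hadoop version` are the definitive checks):

```shell
# Rebuild the PATH additions from .bashrc and confirm both toolchains are on it
JAVA_HOME=/usr/local/jdk1.8
HADOOP_HOME=/usr/local/hadoop3
PATH="$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin"
echo "$PATH" | tr ':' '\n' | grep -E 'jdk1.8|hadoop3'
```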
Configure Hadoop
- Enter the configuration directory:
cd /usr/local/hadoop3/etc/hadoop
- Edit core-site.xml:
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/local/hadoop3/tmp</value>
<description>Base directory for Hadoop temporary files</description>
</property>
<property>
<name>fs.defaultFS</name>
<!-- Hadoop 1.x used fs.default.name -->
<value>hdfs://master:9000</value>
<description>HDFS NameNode URI</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>102400</value>
<description>Size of the read/write buffer</description>
</property>
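Note that hadoop-env.sh also needs JAVA_HOME set explicitly: daemons launched over ssh do not read .bashrc. A one-line addition (path per the layout above):

```shell
# etc/hadoop/hadoop-env.sh -- ssh-spawned daemons skip .bashrc, so set this here too
export JAVA_HOME=/usr/local/jdk1.8
```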
- Edit hdfs-site.xml:
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>slave1:50080</value>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
<description>Number of replicas per block</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/usr/local/hadoop3/hdfs/name</value>
<description>NameNode metadata directory</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/usr/local/hadoop3/hdfs/data</value>
<description>DataNode block storage directory</description>
</property>
- Edit mapred-site.xml:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=/usr/local/hadoop3</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=/usr/local/hadoop3</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=/usr/local/hadoop3</value>
</property>
- Edit yarn-site.xml:
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
Clone the Nodes
- With the configuration above complete, use this machine as a template to clone additional nodes
- This guide uses two slave nodes as an example
Configure Hostnames and IP Addresses
- Set the hostnames to master, slave1, and slave2 respectively:
# hostnamectl set-hostname xxx
- If the file /etc/cloud/cloud.cfg exists, set preserve_hostname to true
- Assign the static IPs 192.168.127.134, 192.168.127.135, and 192.168.127.136 respectively:
# vim /etc/netplan/50-cloud-init.yaml
- Apply the network configuration:
# netplan apply
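For reference, a minimal static-address stanza for 50-cloud-init.yaml might look like the following sketch. The interface name ens33 and the gateway/DNS address 192.168.127.2 are assumptions (typical for a VMware NAT network); check yours with `ip a` before editing:

```yaml
network:
  version: 2
  ethernets:
    ens33:                      # interface name is an assumption; check with `ip a`
      dhcp4: no
      addresses: [192.168.127.134/24]
      gateway4: 192.168.127.2   # assumed VMware NAT gateway
      nameservers:
        addresses: [192.168.127.2]
```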
- Add static DNS entries to /etc/hosts on every node, for example:
# vim /etc/hosts
192.168.127.134 master
192.168.127.135 slave1
192.168.127.136 slave2
Set Up Passwordless SSH Between Nodes
- On master, slave1, and slave2, run:
ssh-keygen -t rsa -P ""
- On master, merge the public keys of all three nodes into authorized_keys:
# cd ~/.ssh
# scp -P 22 slave1:~/.ssh/id_rsa.pub id_rsa.pub1
# scp -P 22 slave2:~/.ssh/id_rsa.pub id_rsa.pub2
# cat id_rsa.pub >> authorized_keys
# cat id_rsa.pub1 >> authorized_keys
# cat id_rsa.pub2 >> authorized_keys
- Copy the merged file to slave1 and slave2:
# scp -P 22 authorized_keys slave1:~/.ssh/
# scp -P 22 authorized_keys slave2:~/.ssh/
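As an aside, `ssh-copy-id` can replace the manual scp-and-cat steps above by appending the local public key to each remote authorized_keys directly. A sketch (hostnames assume the /etc/hosts entries above; the dry-run guard only prints the commands, unset it to actually copy keys):

```shell
# Sketch: push the local public key to every node with ssh-copy-id.
# DRY_RUN=1 prints each command instead of running it.
DRY_RUN=1
for node in master slave1 slave2; do
  cmd="ssh-copy-id -i ~/.ssh/id_rsa.pub $node"
  if [ -n "$DRY_RUN" ]; then echo "$cmd"; else eval "$cmd"; fi
done
```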
Configure the Startup Scripts
- Only the master node needs this
- Enter the directory containing the scripts:
cd /usr/local/hadoop3/sbin
- Add the following to start-dfs.sh and stop-dfs.sh:
HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
- Add the following to start-yarn.sh and stop-yarn.sh:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
Start and Verify
- Start the cluster:
# /usr/local/hadoop3/sbin/start-all.sh
- List all running Java processes:
jps
- Open master:8088 (YARN) and master:9870 (HDFS) in a browser to check Hadoop's built-in web UIs
Run a Test Job
- Enter the Hadoop root directory:
/usr/local/hadoop3
- Create a directory in HDFS:
# hdfs dfs -mkdir -p /data/input
- Upload any text file, for example:
# hdfs dfs -put README.txt /data/input
- Run the MapReduce wordcount example:
# hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar wordcount /data/input /data/output/result
- View the result:
# hdfs dfs -cat /data/output/result/part-r-00000
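For comparison, the counting that the wordcount example performs can be sketched with plain shell tools on a tiny local file (file name and contents here are made up):

```shell
# Emulate wordcount locally: split into one word per line, then count duplicates
printf 'hello hadoop\nhello world\n' > /tmp/wc-demo.txt
tr -s ' ' '\n' < /tmp/wc-demo.txt | sort | uniq -c
# The HDFS part-r-00000 file holds the same word/count pairs, tab-separated.
```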