Apache Hadoop Cluster Installation Guide

Posted by 小怪兽的技术博客



Overview:

Software: jdk-8u111-linux-x64.rpm, hadoop-2.8.0.tar.gz

http://www.apache.org/dyn/closer.cgi/hadoop/common/hadoop-2.8.0/hadoop-2.8.0.tar.gz

OS: CentOS 6.8 x64

Host list and hardware specs:

                      master.hadoop    datanode[01:03].hadoop

  CPU:                      8                    4

  MEM:                     16G                   8G

  DISK:                  100G*2               100G*2

1. System Initialization

# master.hadoop

shell > vim /etc/hosts

192.168.1.25  master.hadoop
192.168.1.27  datanode01.hadoop
192.168.1.28  datanode02.hadoop
192.168.1.29  datanode03.hadoop

shell > yum -y install epel-release
shell > yum -y install ansible

shell > ssh-keygen  # generate an SSH key pair
shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 [email protected]"
shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 [email protected]"
shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 [email protected]"

shell > vim /etc/ansible/hosts

# datanode.hadoop

[datanode]

datanode[01:03].hadoop
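The `[01:03]` range above is ansible's inventory host-pattern syntax. As an illustration only (nothing the install itself needs), the hostnames it expands to can be reproduced in plain bash:

```shell
# Reproduce ansible's [01:03] inventory range expansion: zero-padded
# counters 01..03 substituted into the hostname template.
for i in 1 2 3; do
  printf 'datanode%02d.hadoop\n' "$i"
done
```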

shell > ansible datanode -m shell -a "useradd hadoop && echo hadoop | passwd --stdin hadoop"  # create the hadoop user ( the whole command must be quoted, or && runs locally )

shell > ansible datanode -m shell -a "echo '* - nofile 65536' >> /etc/security/limits.conf"  # raise the open-file limit

shell > ansible datanode -m copy -a "src=/etc/hosts dest=/etc/hosts"  # sync /etc/hosts

shell > ansible datanode -m shell -a "/etc/init.d/iptables stop && chkconfig --del iptables"  # disable the firewall

shell > ansible datanode -m shell -a "sed -i '/SELINUX/s/enforcing/disabled/' /etc/selinux/config"  # disable SELinux

shell > ansible datanode -m shell -a "echo 'vm.swappiness = 0' >> /etc/sysctl.conf"  # tune kernel parameters

shell > ansible datanode -m shell -a "echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag"  # disable transparent hugepages now

shell > ansible datanode -m shell -a "echo 'echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag' >> /etc/rc.local"  # make the setting persist across reboots

shell > ansible datanode -m shell -a reboot

# Every ansible step above must also be run on master.hadoop itself

2. Time Synchronization

# master.hadoop

shell > /bin/cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

shell > yum -y install ntp

shell > ntpdate us.pool.ntp.org && hwclock -w  # sync once, then write the system time to the hardware clock

shell > vim /etc/ntp.conf
# allow clients on this subnet to sync from this server
restrict 192.168.1.0 mask 255.255.255.0 nomodify
# upstream server this host syncs from
server us.pool.ntp.org prefer
# fall back to the local clock when the upstream server is unreachable
server 127.127.1.0
fudge 127.127.1.0 stratum 10
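As a sanity check on the `restrict` line, here is a small POSIX-shell helper (illustrative only, not part of the install) that masks an address against the netmask to confirm each cluster IP lands in the allowed 192.168.1.0 network:

```shell
# Mask an IPv4 address with a netmask and print the resulting network
# address; every host in this cluster should map to 192.168.1.0.
in_subnet() {
  oldIFS=$IFS; IFS=.
  set -- $1 $2          # split both dotted quads into 8 positional params
  IFS=$oldIFS
  echo "$(( $1 & $5 )).$(( $2 & $6 )).$(( $3 & $7 )).$(( $4 & $8 ))"
}
in_subnet 192.168.1.25 255.255.255.0   # -> 192.168.1.0
in_subnet 192.168.1.29 255.255.255.0   # -> 192.168.1.0
```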

shell > /etc/init.d/ntpd start && chkconfig --level 35 ntpd on

shell > ansible datanode -m shell -a "yum -y install ntpdate"

shell > ansible datanode -m shell -a "/bin/cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime"

shell > ansible datanode -m shell -a "ntpdate master.hadoop && hwclock -w"

shell > ansible datanode -m cron -a "name='ntpdate master.hadoop' minute=0 hour=0 job='/usr/sbin/ntpdate master.hadoop > /dev/null && hwclock -w'"

3. Cluster Deployment

# master.hadoop

1) Install the JDK, download and extract Apache Hadoop, and set up passwordless SSH between hosts for the hadoop user

shell > rpm -ivh /usr/local/src/jdk-8u111-linux-x64.rpm

shell > echo 'export JAVA_HOME=/usr/java/default' >> /etc/profile && source /etc/profile

shell > tar zxf /usr/local/src/hadoop-2.8.0.tar.gz -C /usr/local/

shell > chown -R hadoop.hadoop /usr/local/hadoop-2.8.0

shell > su - hadoop

hadoop shell > ssh-keygen

hadoop shell > cat .ssh/id_rsa.pub > .ssh/authorized_keys && chmod 600 .ssh/authorized_keys

hadoop shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 [email protected]"
hadoop shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 [email protected]"
hadoop shell > ssh-copy-id -i ~/.ssh/id_rsa.pub "-p 22 [email protected]"

hadoop shell > echo 'export PATH=$PATH:/usr/local/hadoop-2.8.0/bin:/usr/local/hadoop-2.8.0/sbin' >> .bashrc && source .bashrc
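A caveat on quoting this line: if `$PATH` is unquoted or double-quoted, the shell expands it at `echo` time and freezes today's value into `.bashrc`; single quotes write the literal string, so `$PATH` is re-expanded every time `.bashrc` is sourced. A minimal demonstration:

```shell
# Single quotes keep $PATH literal; double quotes expand it immediately.
lazy='export PATH=$PATH:/usr/local/hadoop-2.8.0/bin'
eager="export PATH=$PATH:/usr/local/hadoop-2.8.0/bin"
echo "$lazy"    # still contains the literal text $PATH
```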

2) Configure Apache Hadoop

# List the slaves, i.e. the hosts that run the DataNode and NodeManager roles

hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/slaves
datanode01.hadoop
datanode02.hadoop
datanode03.hadoop

# Edit hadoop-env.sh

hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/java/default

# Edit core-site.xml

hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/core-site.xml

<configuration>

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master.hadoop:9000</value>
    </property>

    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:///data/hadoop/tmp</value>
    </property>

    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>

</configuration>

# Hadoop core configuration file
# Defaults are documented in HADOOP_HOME/share/doc/hadoop/hadoop-project-dist/hadoop-common/core-default.xml

# fs.defaultFS       NameNode address; older versions called it fs.default.name
# hadoop.tmp.dir     Hadoop working directory; many other paths default to subdirectories of it ( the default, /tmp, is wiped on reboot ), so set it explicitly!
# fs.trash.interval  trash retention in minutes; 1440 = 24 hours, and the default 0 disables trash ( files deleted through the filesystem shell are moved to trash and purged after the interval )
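To double-check which NameNode URI a finished `core-site.xml` actually points at, a simple grep/sed pipeline is enough. The sketch below runs against a scratch copy in /tmp; on a real node, point it at /usr/local/hadoop-2.8.0/etc/hadoop/core-site.xml instead:

```shell
# Write a minimal core-site.xml to a scratch path, then pull out the
# value that follows the fs.defaultFS property name.
cat > /tmp/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master.hadoop:9000</value>
  </property>
</configuration>
EOF
grep -A1 '<name>fs.defaultFS</name>' /tmp/core-site.xml \
  | sed -n 's:.*<value>\(.*\)</value>.*:\1:p'   # -> hdfs://master.hadoop:9000
```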

# Edit hdfs-site.xml

hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/hdfs-site.xml

<configuration>

    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>

    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///data/dfs/name</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///data/dfs/datanode</value>
    </property>

    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///data/dfs/namesecondary</value>
    </property>

    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master.hadoop:50090</value>
    </property>

</configuration>

# HDFS configuration file
# Defaults are documented in HADOOP_HOME/share/doc/hadoop/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

# dfs.namenode.handler.count     number of NameNode threads for DataNode RPC traffic; default 10, raising it can improve performance at the cost of more resources
# dfs.hosts / dfs.hosts.exclude  allow or deny specific DataNodes connecting to the NameNode

# dfs.blocksize           block size, default 134217728 ( 128M )
# dfs.replication         default replica count, for data redundancy
# dfs.namenode.name.dir   where NameNode metadata lives; multiple comma-separated directories may be given for redundancy!
# dfs.datanode.data.dir   where DataNode block data lives; multiple comma-separated directories are written round-robin for throughput ( each directory should sit on its own disk )
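Two quick back-of-envelope checks on these values, using nothing but shell arithmetic: the default block size is exactly 128 MiB, and with `dfs.replication` at 3 every file costs three times its size in raw cluster capacity (the ~5.4 MB figure matches the shakespeare.txt used in the example run later):

```shell
# 134217728 bytes expressed in MiB:
echo $(( 134217728 / 1024 / 1024 ))   # -> 128
# Raw bytes consumed by a 5447165-byte file at replication factor 3:
echo $(( 5447165 * 3 ))               # -> 16341495
```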

# Edit mapred-site.xml

hadoop shell > cat /usr/local/hadoop-2.8.0/etc/hadoop/mapred-site.xml.template > /usr/local/hadoop-2.8.0/etc/hadoop/mapred-site.xml
hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/mapred-site.xml

<configuration>

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master.hadoop:10020</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master.hadoop:19888</value>
    </property>

</configuration>

# MapReduce configuration file
# Defaults are documented in HADOOP_HOME/share/doc/hadoop/hadoop-mapreduce-client/hadoop-mapreduce-client-core/mapred-default.xml

# mapreduce.framework.name  run MapReduce jobs on YARN

# Edit yarn-site.xml

hadoop shell > vim /usr/local/hadoop-2.8.0/etc/hadoop/yarn-site.xml

<configuration>

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master.hadoop</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

</configuration>

# YARN configuration file
# Defaults are documented in HADOOP_HOME/share/doc/hadoop/hadoop-yarn/hadoop-yarn-common/yarn-default.xml

# yarn.nodemanager.aux-services
# yarn.nodemanager.aux-services.mapreduce_shuffle.class

# yarn.resourcemanager.hostname         ResourceManager address; default port 8032
# yarn.resourcemanager.scheduler.class  scheduling policy: CapacityScheduler ( capacity ), FairScheduler ( fair share ), or FifoScheduler ( first in, first out )

hadoop shell > exit

3) Deploy the slaves

shell > ansible datanode -m copy -a src=/usr/local/src/jdk-8u111-linux-x64.rpm dest=/usr/local/src/

shell > yum -y install rsync

shell > ansible datanode -m shell -a "yum -y install rsync"

shell > ansible datanode -m synchronize -a src=/usr/local/hadoop-2.8.0 dest=/usr/local/

# I first used the copy module and it was painfully slow; the synchronize module wraps rsync and is much faster!

shell > ansible datanode -m shell -a "rpm -ivh /usr/local/src/jdk-8u111-linux-x64.rpm"

shell > ansible datanode -m shell -a "echo 'export JAVA_HOME=/usr/java/default' >> /etc/profile && source /etc/profile"

4. Starting the Cluster

# master.hadoop

shell > chmod -R a+w /data
shell > ansible datanode -m shell -a "chmod -R a+w /data"

# /data must be writable, otherwise `hdfs namenode -format` cannot initialize the filesystem

shell > su - hadoop

hadoop shell > hdfs namenode -format  # format the filesystem before the first start

hadoop shell > start-all.sh  # start all daemons / stop-all.sh stops them

hadoop shell > jps
4386 ResourceManager
4659 Jps
3990 NameNode
4204 SecondaryNameNode

# Roles running on master.hadoop
# http://192.168.1.25:50070 # NameNode web UI
# http://192.168.1.25:8088  # ResourceManager web UI

# datanode.hadoop

hadoop shell > jps
2508 Jps
2238 DataNode
2351 NodeManager

# Roles running on each datanode host

hadoop shell > hdfs dfs -ls
ls: `.': No such file or directory

# Relative HDFS paths resolve to /user/<username>, which does not exist yet

hadoop shell > hdfs dfs -mkdir /user
hadoop shell > hdfs dfs -mkdir /user/hadoop

hadoop shell > hdfs dfs -ls
drwx------   - hadoop supergroup          0 2017-04-11 19:21 .Trash

5. Running an Example

# master.hadoop

hadoop shell > hdfs dfs -put shakespeare.txt  # upload a local file to HDFS
hadoop shell > hadoop jar /usr/local/hadoop-2.8.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep shakespeare.txt outfile what

# Run the bundled example job: count how many times the pattern 'what' occurs

hadoop shell > hdfs dfs -ls
drwx------   - hadoop supergroup          0 2017-04-11 19:21 .Trash
drwxr-xr-x   - hadoop supergroup          0 2017-04-11 19:38 outfile
-rw-r--r--   3 hadoop supergroup    5447165 2017-04-11 19:35 shakespeare.txt

hadoop shell > hdfs dfs -cat outfile/*
2309    what
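What the `grep` example job computes can be sanity-checked locally: it reports the total number of matches of the regex, which `grep -o | wc -l` reproduces on an ordinary file (an illustrative sample below, not the actual shakespeare.txt):

```shell
# Count every occurrence of 'what' (not just matching lines), the same
# quantity the MapReduce grep example reports for the pattern.
printf 'what is this\nwhat now, what next\nnothing here\n' > /tmp/sample.txt
grep -o 'what' /tmp/sample.txt | wc -l   # -> 3
```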

Troubleshooting:

1) bin/hdfs namenode -format fails to initialize the filesystem

17/04/01 19:04:29 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Cannot create directory /data/dfs/namenode/current
    at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.clearDirectory(Storage.java:352)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:573)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.format(NNStorage.java:594)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.format(FSImage.java:156)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1102)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1544)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1671)

# Fix

shell > chmod -R a+w /data
shell > ansible datanode -m shell -a chmod -R a+w /data








