Linux in Action: Hadoop Installation and Deployment

Posted by 会不了一点


Big Data Cluster (Hadoop Ecosystem) Installation and Deployment

Overview

1) Hadoop is a distributed system infrastructure developed by the Apache Software Foundation.

2) It primarily addresses two problems: storing massive datasets, and analyzing and computing over them.

Three Core Components

  • Hadoop HDFS: distributed storage for massive datasets

  • Hadoop YARN: distributed cluster resource management

  • Hadoop MapReduce: distributed computation over massive datasets

Prerequisites

  • Make sure you have completed the cluster environment preparation
  • That is: JDK installed, passwordless SSH, firewall disabled, hostname mappings configured, and so on

JDK and Firewall Configuration

Cluster environment preparation (passwordless SSH, disabling the firewall, configuring hostname mappings) is covered in a separate guide.
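
A quick way to confirm those prerequisites before continuing; a minimal sketch, assuming CentOS with firewalld and the node1/node2/node3 hostnames used below:

    # JDK present?
    java -version
    # passwordless SSH to the other nodes?
    ssh -o BatchMode=yes node2 hostname
    ssh -o BatchMode=yes node3 hostname
    # firewall disabled? (should print "inactive")
    systemctl is-active firewalld
    # hostname mappings in place?
    grep -E 'node1|node2|node3' /etc/hosts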

Hadoop Cluster Roles

The following process roles appear in the Hadoop ecosystem:

  1. Hadoop HDFS management role: NameNode process (only 1 needed; one manager is enough)
  2. Hadoop HDFS worker role: DataNode process (multiple needed; workers, the more the better, one per machine)
  3. Hadoop YARN management role: ResourceManager process (only 1 needed; one manager is enough)
  4. Hadoop YARN worker role: NodeManager process (multiple needed; workers, the more the better, one per machine)
  5. Hadoop history server role: HistoryServer process (only 1 needed; an auxiliary service, 1 is enough)
  6. Hadoop proxy server role: WebProxyServer process (only 1 needed; an auxiliary service, 1 is enough)
  7. ZooKeeper process: QuorumPeerMain process (one per machine; ZooKeeper workers, in practice an odd number such as 3)

Role and Node Assignment

Roles are assigned as follows:

  1. node1: NameNode, DataNode, ResourceManager, NodeManager, HistoryServer, WebProxyServer, QuorumPeerMain
  2. node2: DataNode, NodeManager, QuorumPeerMain
  3. node3: DataNode, NodeManager, QuorumPeerMain

Installation

Adjusting Virtual Machine Memory

As the role assignment above shows, node1 carries a heavy load, while node2 and node3 also run quite a few processes.

To keep the cluster stable, the virtual machines' memory needs to be adjusted.

In VMware, configure:

  1. node1 with 4 GB of memory or more
  2. node2 and node3 with 2 GB of memory or more

Big data software is inherently clustered: it is designed to run across a group of servers.

Since we are simulating that cluster with several virtual machines on a single computer, memory pressure is indeed significant.

ZooKeeper Cluster Deployment

ZooKeeper cluster deployment is covered in a separate guide; complete it before continuing.

Hadoop Cluster Deployment

  1. Download the Hadoop tarball, extract it, and create a symlink

    # 1. Download
    wget http://archive.apache.org/dist/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz
    
    # 2. Extract
    # make sure the directory /export/server exists
    tar -zxvf hadoop-3.3.0.tar.gz -C /export/server/
    
    # 3. Create the symlink
    ln -s /export/server/hadoop-3.3.0 /export/server/hadoop
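
    A quick sanity check (sketch) that the extraction and symlink worked:

    # the symlink should point at the versioned directory
    ls -ld /export/server/hadoop
    # the configuration files edited below live here
    ls /export/server/hadoop/etc/hadoop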
    
  2. Edit the configuration file hadoop-env.sh

    Hadoop has many configuration files that need changes, so work carefully.

    cd into /export/server/hadoop/etc/hadoop; all the configuration files live in this folder.

    Edit the hadoop-env.sh file.

    This file sets a number of environment variables that Hadoop uses.

    These are temporary variables, in effect while Hadoop is running.

    To make them permanent, write them into /etc/profile as well.

    # add at the top of the file:
    # Java installation path
    export JAVA_HOME=/export/server/jdk
    # Hadoop installation path
    export HADOOP_HOME=/export/server/hadoop
    # Hadoop HDFS configuration file path
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    # Hadoop YARN configuration file path
    export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
    # Hadoop YARN log directory
    export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
    # Hadoop HDFS log directory
    export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs
    
    # users that start the Hadoop daemons
    export HDFS_NAMENODE_USER=root
    export HDFS_DATANODE_USER=root
    export HDFS_SECONDARYNAMENODE_USER=root
    export YARN_RESOURCEMANAGER_USER=root
    export YARN_NODEMANAGER_USER=root
    export YARN_PROXYSERVER_USER=root
    
  3. Edit the configuration file core-site.xml

    Clear the file and fill in the following content:

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    
    <!-- Put site-specific property overrides in this file. -->
    <configuration>
      <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:8020</value>
        <description></description>
      </property>
    
      <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
        <description></description>
      </property>
    </configuration>
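
    fs.defaultFS makes node1 the default filesystem, so paths without a scheme resolve against HDFS. A small sketch of the equivalence, once the cluster is running:

    # these two commands address the same directory
    hadoop fs -ls hdfs://node1:8020/
    hadoop fs -ls /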
    
  4. Configure the hdfs-site.xml file

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
        <property>
            <name>dfs.datanode.data.dir.perm</name>
            <value>700</value>
        </property>
    
      <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/nn</value>
        <description>Path on the local filesystem where the NameNode stores the namespace and transactions logs persistently.</description>
      </property>
    
      <property>
        <name>dfs.namenode.hosts</name>
        <value>node1,node2,node3</value>
        <description>List of permitted DataNodes.</description>
      </property>
    
      <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
        <description></description>
      </property>
    
    
      <property>
        <name>dfs.namenode.handler.count</name>
        <value>100</value>
        <description></description>
      </property>
    
      <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/dn</value>
      </property>
    </configuration>
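
    Once the configuration is distributed, you can confirm a value took effect; a small sketch using hdfs getconf, which reads the effective configuration:

    # should print 268435456 (a 256 MB block size)
    hdfs getconf -confKey dfs.blocksize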
    
  5. Configure the mapred-env.sh file

    # add the following environment variable settings at the top of the file
    export JAVA_HOME=/export/server/jdk
    export HADOOP_JOB_HISTORYSERVER_HEAPSIZE=1000
    export HADOOP_MAPRED_ROOT_LOGGER=INFO,RFA
    
  6. Configure the mapred-site.xml file

    <?xml version="1.0"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    
    <!-- Put site-specific property overrides in this file. -->
    
    <configuration>
      <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description></description>
      </property>
    
      <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node1:10020</value>
        <description></description>
      </property>
    
    
      <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node1:19888</value>
        <description></description>
      </property>
    
    
      <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/data/mr-history/tmp</value>
        <description></description>
      </property>
    
    
      <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/data/mr-history/done</value>
        <description></description>
      </property>
    <property>
      <name>yarn.app.mapreduce.am.env</name>
      <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
    </property>
    <property>
      <name>mapreduce.map.env</name>
      <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
    </property>
    <property>
      <name>mapreduce.reduce.env</name>
      <value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
    </property>
    </configuration>
    
  7. Configure the yarn-env.sh file

    # add the following environment variable settings at the top of the file
    export JAVA_HOME=/export/server/jdk
    export HADOOP_HOME=/export/server/hadoop
    export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
    export YARN_LOG_DIR=$HADOOP_HOME/logs/yarn
    export HADOOP_LOG_DIR=$HADOOP_HOME/logs/hdfs
    
  8. Configure the yarn-site.xml file

    <?xml version="1.0"?>
    <!--
      Licensed under the Apache License, Version 2.0 (the "License");
      you may not use this file except in compliance with the License.
      You may obtain a copy of the License at
    
        http://www.apache.org/licenses/LICENSE-2.0
    
      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
      See the License for the specific language governing permissions and
      limitations under the License. See accompanying LICENSE file.
    -->
    <configuration>
    
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://node1:19888/jobhistory/logs</value>
        <description></description>
    </property>
    
      <property>
        <name>yarn.web-proxy.address</name>
        <value>node1:8089</value>
        <description>proxy server hostname and port</description>
      </property>
    
    
      <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
        <description>Configuration to enable or disable log aggregation</description>
      </property>
    
      <property>
        <name>yarn.nodemanager.remote-app-log-dir</name>
        <value>/tmp/logs</value>
        <description>Configuration to enable or disable log aggregation</description>
      </property>
    
    
    <!-- Site specific YARN configuration properties -->
      <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node1</value>
        <description></description>
      </property>
    
      <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
        <description></description>
      </property>
    
      <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/data/nm-local</value>
        <description>Comma-separated list of paths on the local filesystem where intermediate data is written.</description>
      </property>
    
    
      <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/data/nm-log</value>
        <description>Comma-separated list of paths on the local filesystem where logs are written.</description>
      </property>
    
    
      <property>
        <name>yarn.nodemanager.log.retain-seconds</name>
        <value>10800</value>
        <description>Default time (in seconds) to retain log files on the NodeManager Only applicable if log-aggregation is disabled.</description>
      </property>
    
    
    
      <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
        <description>Shuffle service that needs to be set for Map Reduce applications.</description>
      </property>
    </configuration>
    
  9. Edit the workers file

    # the full contents are as follows
    node1
    node2
    node3
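
    The start scripts read this file and SSH into every listed host to launch the worker daemons, which is why passwordless SSH is a prerequisite. A quick sketch to confirm connectivity:

    for host in node1 node2 node3; do
      ssh -o BatchMode=yes "$host" hostname || echo "SSH to $host failed"
    done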
    
  10. Distribute hadoop to the other machines

    # run on node1
    cd /export/server
    
    scp -r hadoop-3.3.0 node2:`pwd`/
    scp -r hadoop-3.3.0 node3:`pwd`/
  11. On node2 and node3, run:

    # create the symlink
    ln -s /export/server/hadoop-3.3.0 /export/server/hadoop
    
  12. Create the required directories

    • On node1, run:

      mkdir -p /data/nn
      mkdir -p /data/dn
      mkdir -p /data/nm-log
      mkdir -p /data/nm-local
      
    • On node2, run:

      mkdir -p /data/dn
      mkdir -p /data/nm-log
      mkdir -p /data/nm-local
      
    • On node3, run:

      mkdir -p /data/dn
      mkdir -p /data/nm-log
      mkdir -p /data/nm-local
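
    Equivalently, everything can be created from node1 in one go; a sketch assuming passwordless SSH as root, as set up in the prerequisites:

      # on node1 itself
      mkdir -p /data/nn /data/dn /data/nm-log /data/nm-local
      # on the workers
      for host in node2 node3; do
        ssh "$host" "mkdir -p /data/dn /data/nm-log /data/nm-local"
      done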
      
  13. Configure environment variables

    On node1, node2, and node3, edit /etc/profile:

    export HADOOP_HOME=/export/server/hadoop
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
    

    Run source /etc/profile to make it take effect.
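
    A quick check that the variables took effect; a sketch:

    # should print the Hadoop 3.3.0 version banner from any directory
    hadoop version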

  14. Format the NameNode; run on node1:

    hadoop namenode -format
    

    The hadoop command comes from the programs in $HADOOP_HOME/bin.

    Because PATH was configured, the hadoop command can be run from any directory.

  15. Start Hadoop's HDFS cluster; running this on node1 is enough:

    start-dfs.sh
    
    # to stop it, run
    stop-dfs.sh
    

    The start-dfs.sh command comes from the programs in $HADOOP_HOME/sbin.

    Because PATH was configured, start-dfs.sh can be run from any directory.

  16. Start Hadoop's YARN cluster; running this on node1 is enough:

    start-yarn.sh
    
    # to stop it, run
    stop-yarn.sh
    
  17. Start the history server

    mapred --daemon start historyserver
    
    # to stop it, replace start with stop
    
  18. Start the web proxy server

    yarn-daemon.sh start proxyserver
    
    # to stop it, replace start with stop
    # note: yarn-daemon.sh is deprecated in Hadoop 3;
    # the current form is: yarn --daemon start proxyserver
    

Verifying That the Hadoop Cluster Is Running

  1. On node1, node2, and node3, run jps to verify that all expected processes started (a sketch of the expected output follows this list)

  2. Verify HDFS: open http://node1:9870 in a browser

    Create a file test.txt, fill in any content, and run:

    hadoop fs -put test.txt /test.txt
    
    hadoop fs -cat /test.txt
    
  3. Verify YARN: open http://node1:8088 in a browser

    Run:

    # create a file words.txt and fill in the following content
    itheima itcast hadoop
    itheima hadoop hadoop
    itheima itcast
    
    # upload the file to HDFS
    hadoop fs -put words.txt /words.txt
    
    # run the following job to verify that YARN works
    hadoop jar /export/server/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.0.jar wordcount -Dmapred.job.queue.name=root.root /words.txt /output
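
    To confirm the whole cluster came up, compare jps output on each node against the role table above; a sketch of what to expect (PIDs will differ):

    # node1: NameNode, DataNode, ResourceManager, NodeManager,
    #        JobHistoryServer, WebAppProxyServer, QuorumPeerMain
    # node2 / node3: DataNode, NodeManager, QuorumPeerMain
    jps
    
    # once the wordcount job finishes, inspect the result
    hadoop fs -ls /output
    hadoop fs -cat /output/part-r-00000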
    

The Simplest Hands-On Introduction to Hadoop

Part 1: Deploying Hadoop in Local Mode

  1. Set up a Linux environment
    I used CentOS 7
  2. Create a directory under /opt

mkdir /opt/module

  3. Install the JDK
  4. Download Hadoop from https://hadoop.apache.org/releases.html and extract it into /opt/module
  5. Configure the Hadoop environment variables

vi /etc/profile

JAVA_HOME=/usr/local/jdk1.8.0_151
HADOOP_HOME=/opt/module/hadoop-2.10.0
CLASSPATH=.:$JAVA_HOME/lib/tools.jar
PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin
export JAVA_HOME CLASSPATH PATH

After configuring, reload:

source /etc/profile

And that's the whole installation. Simple, right?
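
A quick check that the local install works; a sketch:

cd /opt/module/hadoop-2.10.0
bin/hadoop version   # should print Hadoop 2.10.0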

Part 2: Running the Demo

  1. Create a test input file (run from /opt/module, and make sure the data directory exists first, e.g. mkdir -p data)

echo 'hadoop mapreduce hivehbase spark stormsqoop hadoop hivespark' > data/wc.input

  2. Run the command
    This is the word-count example program that ships with Hadoop (run from the hadoop-2.10.0 directory):

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.0.jar wordcount ../data/wc.input output

  3. When the run completes, an output directory is created; a _SUCCESS file inside it means the JOB succeeded, and part-r-00000 is the results file. Sample results are shown below:
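
The exact counts follow from the input line above; a sketch of the expected output (wordcount emits word and count separated by a tab, sorted by word):

cat output/part-r-00000
# hadoop      2
# hivehbase   1
# hivespark   1
# mapreduce   1
# spark       1
# stormsqoop  1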

Part 3: Pseudo-Distributed Deployment

Go into the Hadoop configuration directory:

cd /opt/module/hadoop-2.10.0/etc/hadoop

  1. Configure hadoop-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0_151
  2. Configure core-site.xml
<configuration>
   <!-- the namenode address for HDFS  -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://eshop01:9000</value>
    </property>
   <!-- storage directory for files HDFS generates at runtime  -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-2.10.0/data/tmp</value>
    </property>

</configuration>

  3. Configure hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
  4. Start the cluster
  • Format the NameNode (only needed before the first start)

bin/hdfs namenode -format

  • Start the NameNode

sbin/hadoop-daemon.sh start namenode

  • Start the DataNode

sbin/hadoop-daemon.sh start datanode
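
Check that both daemons are up; a quick sketch:

jps
# expect output like (PIDs will differ):
# 12001 NameNode
# 12102 DataNode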

Part 4: HDFS Operations

  1. Create a directory in HDFS

bin/hdfs dfs -mkdir -p /usr/mmc

  2. Upload a local file to HDFS

bin/hdfs dfs -put /opt/module/data/wc.input /usr/mmc

  3. Delete files

bin/hdfs dfs -rm -r /usr/mmc

View the effect in the web UI (in Hadoop 2.x the NameNode UI defaults to http://<namenode-host>:50070).

Part 5: Starting YARN

  1. Configure yarn-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0_151

  2. Configure yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
</property>
<property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop101</value>
</property>

</configuration>


Set hadoop101 above to your own virtual machine's hostname.

  3. Configure mapred-env.sh

export JAVA_HOME=/usr/local/jdk1.8.0_151

  4. Configure mapred-site.xml (created by renaming mapred-site.xml.template)

mv mapred-site.xml.template mapred-site.xml

<configuration>
        <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        </property>
</configuration>

  5. Start YARN
 sbin/yarn-daemon.sh start resourcemanager
 sbin/yarn-daemon.sh start nodemanager
  6. Run a MapReduce program
  • First upload a file to HDFS
hdfs dfs -mkdir -p /usr/mmc/input
hdfs dfs -put ../data/wc.input /usr/mmc/input
  • Run the program

Note: before running, check with jps that NameNode, DataNode, ResourceManager, and NodeManager have all started.

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.0.jar wordcount /usr/mmc/input /usr/mmc/output
  • Monitor progress

http://192.168.1.21:8088/cluster

You can now watch the job's progress, but the History link is still not clickable; the history server needs to be started.

  7. Configure the history server
  • Open mapred-site.xml
<configuration>
        <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        </property>
        <property>
        <name>mapreduce.jobhistory.address</name>
        <value>eshop01:10020</value>
        </property>
        <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>eshop01:19888</value>
        </property>
</configuration>

  • Start it

sbin/mr-jobhistory-daemon.sh start historyserver

Part 6: Log Aggregation

Note: enabling log aggregation requires restarting the NodeManager, ResourceManager, and JobHistoryServer.

  1. Configure yarn-site.xml and add the following settings
<!-- enable log aggregation  -->
<property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
</property>
<!-- log retention time (seconds)  -->
<property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
</property>

  2. Restart the NodeManager, ResourceManager, and JobHistoryServer, as sketched below
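
A sketch of the restart, using the same daemon scripts as above (run from the hadoop-2.10.0 directory):

sbin/yarn-daemon.sh stop nodemanager
sbin/yarn-daemon.sh stop resourcemanager
sbin/mr-jobhistory-daemon.sh stop historyserver

sbin/yarn-daemon.sh start resourcemanager
sbin/yarn-daemon.sh start nodemanager
sbin/mr-jobhistory-daemon.sh start historyserver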

  3. Run the sample program

hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.10.0.jar wordcount /usr/mmc/input /usr/mmc/output
  4. View the logs
    Open http://192.168.1.21:19888/jobhistory, click into a job, then click its logs.
