Hadoop大数据分析与挖掘实战----------P23~25

6. Installing Hadoop

  1) From the Hadoop website, download a stable, pre-built binary package and extract it.

[hadoop@master ~]$ wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
[hadoop@master ~]$ tar -zxvf hadoop-2.7.3.tar.gz -C ~/opt
[hadoop@master ~]$ cd ~/opt/hadoop-2.7.3

  2) Set the environment variables:

[hadoop@master hadoop-2.7.3]$ vim ~/.bashrc
# User specific aliases and functions
export HADOOP_PREFIX=$HOME/opt/hadoop-2.7.3
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
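
After saving .bashrc, reload it so the variables take effect in the current session, and check that the hadoop command resolves (a quick sanity check I'd add here, not a step from the book):

[hadoop@master hadoop-2.7.3]$ source ~/.bashrc
[hadoop@master hadoop-2.7.3]$ hadoop version    # should report Hadoop 2.7.3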

  3) Edit the configuration file (etc/hadoop/hadoop-env.sh) and add the line below (note that JAVA_HOME must be set according to the actual JDK location on your own machine):

## Change the value after JAVA_HOME to the JAVA_HOME set on your own machine

# I added
export JAVA_HOME=/usr/lib/jvm/java

## which I then changed to

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.111-2.b15.el7_3.x86_64
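
If you are not sure what the JDK path on your machine is, one way to find it (a sketch assuming java is on the PATH and was installed from a system package):

[hadoop@master hadoop-2.7.3]$ readlink -f $(which java)
# strip the trailing /jre/bin/java (or /bin/java) from the output to get JAVA_HOME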

  4) Edit the configuration file (etc/hadoop/core-site.xml) as follows: (Note ①)

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.0.131:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/home/hadoop/opt/var/hadoop/tmp/hadoop-$USER</value>
    </property>
</configuration>

  5) Edit the configuration file (etc/hadoop/hdfs-site.xml) as follows:

<configuration>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoop/opt/var/hadoop/hdfs/datanode</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoop/opt/var/hadoop/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:/home/hadoop/opt/var/hadoop/hdfs/namesecondary</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
<!--
    <property>
        <name>dfs.datanode.max.xcievers</name>
        <value>2</value>
    </property>
-->
</configuration>
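
The directories referenced above must be writable by the hadoop user. Hadoop creates most of them on first start, but creating them up front (on every node) avoids permission surprises; this is a precaution of mine, not a step from the book:

[hadoop@master hadoop-2.7.3]$ mkdir -p ~/opt/var/hadoop/tmp ~/opt/var/hadoop/hdfs/{datanode,namenode,namesecondary}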

  6) Edit the configuration file (etc/hadoop/yarn-site.xml) as follows:

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

  7) First, copy etc/hadoop/mapred-site.xml.template to etc/hadoop/mapred-site.xml:

[hadoop@master hadoop]$ cp mapred-site.xml.template mapred-site.xml

  Then edit the configuration file (etc/hadoop/mapred-site.xml) as follows:

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.staging.root.dir</name>
        <value>/home</value>
    </property>
</configuration>
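
One thing the steps above leave implicit: for start-all.sh in step 10 to launch DataNodes and NodeManagers on all three machines, etc/hadoop/slaves must list the worker hostnames. Assuming the master also acts as a worker (which matches the startup log in step 10), the file would look like this:

[hadoop@master hadoop]$ cat etc/hadoop/slaves
master
slave1
slave2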

  8) Copy Hadoop to slave1 and slave2. (Note ②)

[hadoop@master opt]$ scp -r /home/hadoop/opt/hadoop-2.7.3 hadoop@slave1:/home/hadoop/opt/
[hadoop@master opt]$ scp -r /home/hadoop/opt/hadoop-2.7.3 hadoop@slave2:/home/hadoop/opt/
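
The environment variables from step 2 are needed on the slaves as well; if they are not set there yet, copying .bashrc over is the quickest route (my own suggestion, not a step from the book):

[hadoop@master opt]$ scp ~/.bashrc hadoop@slave1:~/
[hadoop@master opt]$ scp ~/.bashrc hadoop@slave2:~/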

  9) Format HDFS:

[hadoop@master hadoop]$ hdfs namenode -format

  10) Start the Hadoop cluster. Once startup finishes, use the jps command to list the daemons and verify that the installation succeeded. (Note ③)

# While using the cluster, I found that I had to enter master's password every time. After also copying the public key to master itself, I no longer had to enter the password "three times"!
[hadoop@master ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub master
# Start the Hadoop cluster
[hadoop@master ~]$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /home/hadoop/opt/hadoop-2.7.3/logs/hadoop-hadoop-namenode-master.out
slave1: starting datanode, logging to /home/hadoop/opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave1.out
slave2: starting datanode, logging to /home/hadoop/opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-slave2.out
master: starting datanode, logging to /home/hadoop/opt/hadoop-2.7.3/logs/hadoop-hadoop-datanode-master.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/opt/hadoop-2.7.3/logs/hadoop-hadoop-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/opt/hadoop-2.7.3/logs/yarn-hadoop-resourcemanager-master.out
slave1: starting nodemanager, logging to /home/hadoop/opt/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave1.out
slave2: starting nodemanager, logging to /home/hadoop/opt/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-slave2.out
master: starting nodemanager, logging to /home/hadoop/opt/hadoop-2.7.3/logs/yarn-hadoop-nodemanager-master.out
# master node
[hadoop@master ~]$ jps
44469 DataNode
45256 Jps
44651 SecondaryNameNode
44811 ResourceManager
44939 NodeManager
44319 NameNode
# slave1 node
[hadoop@slave1 ~]$ jps
35973 NodeManager
35847 DataNode
36106 Jps
# slave2 node
[hadoop@slave2 ~]$ jps
36360 NodeManager
36234 DataNode
36493 Jps
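
Beyond jps, two commands give a cluster-wide view: hdfs dfsadmin -report should list three live DataNodes, and yarn node -list should list three NodeManagers (a verification step I'd add, not from the book):

[hadoop@master ~]$ hdfs dfsadmin -report | grep -i "live datanodes"
[hadoop@master ~]$ yarn node -list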

Note ①: Each <property>…</property> element is equivalent to one option: <name> holds the option name and <value> the option's content. The individual settings can be looked up in the official Hadoop 2.7.3 documentation at http://hadoop.apache.org/docs/r2.7.3/ . The settings I kept after digging through it are not necessarily all required, but I have been configuring this for half a month, and in my exhaustion I don't want to trial-and-error any further. If any reader with the same setup can trim this configuration down, I would be most grateful.

Note ②: This step is not in the book 《Hadoop大数据分析与挖掘实战》; although I haven't tested without it, I believe it is necessary.

Note ③: The book starts the cluster in two steps, start-dfs.sh and start-yarn.sh, but after starting that way the hands-on exercise on page P30 of the book wouldn't work. After some troubleshooting, I found that start-all.sh does. Presumably 2.7.3 added some other startup items (?!); I hope an expert can set me straight.

Half a month of work; these notes are for my own use. Please do not repost.

