hadoop

Posted by heping1314



1. Prerequisite: turn off the firewall on every node
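For example, on CentOS 7 with firewalld (an assumption about the OS; run on each node):

systemctl stop firewalld
systemctl disable firewalld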

2. Map the cluster hostnames to IPs in /etc/hosts on every node
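A sketch of the /etc/hosts entries; the 192.168.1.x addresses are placeholders (use the machines' real IPs), the hostnames match the inventory below:

192.168.1.60  nn01
192.168.1.61  node1
192.168.1.62  node2
192.168.1.63  node3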

3. Deploy Ansible

(git clone git@github.com:heheping0312/ansible.git)

yum  -y install ansible
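On CentOS 7 the ansible package normally comes from the EPEL repository, so if the install above cannot find it, EPEL may need to be added first (an assumption about the base repositories):

yum -y install epel-release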

[root@nn01 test]# pwd
/root/test

[root@nn01 test]# cat ansible.cfg
[defaults]
inventory = myhosts
host_key_checking = False

[root@nn01 test]# cat myhosts
[app]
nn01
node1
node2
node3


[app:vars]
ansible_ssh_user="root"
ansible_ssh_pass="1"

 

[root@nn01 test]# ansible app -m ping

node2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
nn01 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
node3 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

4. Deploy the Hadoop cluster (run the following from nn01)

1) Set up the Java environment

ansible app -m command -a "yum -y install java-1.8.0-openjdk"
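An optional quick check that the JDK package landed on every node:

ansible app -m command -a "rpm -q java-1.8.0-openjdk"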

2) Passwordless SSH authentication

[root@nn01 test]# ansible app -m copy -a "src=/root/.ssh/id_rsa.pub dest=/root/.ssh/authorized_keys mode=600"  (safe to run more than once)
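The copy above assumes an SSH key pair already exists on nn01; if /root/.ssh/id_rsa.pub is missing, generate one first, for example:

ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa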

ansible app -m command -a "tar -xf hadoop-2.7.6.tar.gz -C /usr/local/"
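The extraction above assumes hadoop-2.7.6.tar.gz is already in the remote user's home directory on every node; one way to distribute it (assuming the tarball sits in /root on nn01) is:

ansible app -m copy -a "src=/root/hadoop-2.7.6.tar.gz dest=/root/"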

3) rpm -ql java-1.8.0-openjdk — find the path Java was installed to (needed for JAVA_HOME in hadoop-env.sh)

4) Edit the configuration files slaves, hadoop-env.sh, core-site.xml and hdfs-site.xml; the reference copies are on GitHub (a sketch of the key settings follows below).
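The exact contents are in the GitHub repo referenced above; what follows is only a rough sketch of the key settings, and the values are assumptions to cross-check against that repo: nn01 as the NameNode address, /var/hadoop as hadoop.tmp.dir (matching step 8), and a replication factor of 2.

slaves (one DataNode hostname per line):
node1
node2
node3

hadoop-env.sh (paste the JDK path found with rpm -ql in step 3):
export JAVA_HOME="/usr/lib/jvm/java-1.8.0-openjdk-.../jre"
export HADOOP_CONF_DIR="/usr/local/hadoop-2.7.6/etc/hadoop"

core-site.xml:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://nn01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/var/hadoop</value>
  </property>
</configuration>

hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>nn01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>nn01:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>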

5) ansible app -m copy -a "src=/usr/local/hadoop-2.7.6/etc/hadoop/slaves dest=/usr/local/hadoop-2.7.6/etc/hadoop/"

6) ansible app -m copy -a "src=/usr/local/hadoop-2.7.6/etc/hadoop/core-site.xml dest=/usr/local/hadoop-2.7.6/etc/hadoop/"

7) ansible app -m copy -a "src=/usr/local/hadoop-2.7.6/etc/hadoop/hdfs-site.xml dest=/usr/local/hadoop-2.7.6/etc/hadoop/"

8) Create the /var/hadoop data directory on every node, either host by host over SSH (e.g. ssh node1 mkdir /var/hadoop) or in one shot with Ansible:
ansible app -m command -a "mkdir -p /var/hadoop"

9) Format the NameNode (on nn01)
/usr/local/hadoop-2.7.6/bin/hdfs namenode -format

10) Start the cluster
/usr/local/hadoop-2.7.6/sbin/start-dfs.sh
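As a quick sanity check (a sketch; note that jps ships with the java-1.8.0-openjdk-devel package, which the install in step 1) does not pull in), nn01 should be running NameNode and SecondaryNameNode, and node1-node3 should each be running DataNode:

ansible app -m command -a "jps"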

11) Verify that the cluster came up successfully
/usr/local/hadoop-2.7.6/bin/hdfs dfsadmin -report

[root@nn01 ansible]# /usr/local/hadoop-2.7.6/bin/hdfs dfsadmin -report
Configured Capacity: 126421204992 (117.74 GB)
Present Capacity: 98187337728 (91.44 GB)
DFS Remaining: 98187251712 (91.44 GB)
DFS Used: 86016 (84 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (3):  — all three DataNodes are live, so the cluster is up. OK.
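As a further smoke test (a sketch; the path names here are arbitrary), write a small file into HDFS and list it back:

/usr/local/hadoop-2.7.6/bin/hadoop fs -mkdir /input
/usr/local/hadoop-2.7.6/bin/hadoop fs -put /etc/hosts /input/
/usr/local/hadoop-2.7.6/bin/hadoop fs -ls /input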

Related configuration files: git clone git@github.com:heheping0312/hadoop1.git
