install hadoop on xubuntu

Posted by Pentium.Labs

0. install xubuntu

we recommend setting the username to "hadoop" during installation

after installation, create a dedicated "hadoop" group and user (skip this if "hadoop" is already your install user), then grant it administrator rights:

sudo addgroup hadoop  
sudo adduser --ingroup hadoop hadoop 

Open the /etc/sudoers file:

sudo gedit /etc/sudoers  

Below the line root  ALL=(ALL:ALL)  ALL, add: hadoop  ALL=(ALL:ALL)  ALL
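A quick optional check that the sudo change works, a minimal sketch assuming you switch to the hadoop user first:

su - hadoop
sudo whoami    # should print "root" after entering hadoop's password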

 

1. install java

1. Extract the Java archive into /usr/java (a newly created directory). Once extracted, the JDK is ready to use (a command sketch follows after this list).

2. Configure the environment variables as follows.  
In /etc/profile, append the following at the end:    
#set java environment  
export JAVA_HOME=/usr/java/jdk1.7.0_67  
export JRE_HOME=/usr/java/jdk1.7.0_67/jre    
export PATH=$PATH:/usr/java/jdk1.7.0_67/bin    
export CLASSPATH=./:/usr/java/jdk1.7.0_67/lib:/usr/java/jdk1.7.0_67/jre/lib    
  
3. Make the configuration take effect immediately:  
  source /etc/profile  

4. Check whether the configuration succeeded:  
  java -version  
  
If it does not take effect, restart Linux.  
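A minimal sketch of step 1, assuming the downloaded archive is named jdk-7u67-linux-x64.tar.gz (adjust the file name to whatever you actually downloaded):

sudo mkdir -p /usr/java
sudo tar -zxvf jdk-7u67-linux-x64.tar.gz -C /usr/java
ls /usr/java          # should now contain jdk1.7.0_67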

 

2. configure ssh login without entering a password

Perform the following steps as user "hadoop":

su - hadoop  
sudo apt-get install openssh-server 
sudo /etc/init.d/ssh start  

mkdir -p ~/.ssh
cd ~/.ssh
ssh-keygen -t rsa -P ""  
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  
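To check that passwordless login works, ssh into localhost; it should not ask for a password (the very first connection may still ask you to confirm the host key). If it keeps prompting, tightening the permissions on authorized_keys is a common fix:

chmod 600 ~/.ssh/authorized_keys
ssh localhost
exit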

 

3. install hadoop

1. unzip hadoop.tar.gz into /usr/hadoop (extraction commands are sketched after this list)
  then, ensure user "hadoop" owns /usr/hadoop:

sudo chown -R hadoop:hadoop /usr/hadoop
2. edit environment
    2.1 gedit /etc/profile    append these: 

export JAVA_HOME=/usr/java/jdk1.7.0_67  
export JRE_HOME=/usr/java/jdk1.7.0_67/jre    
export HADOOP_INSTALL=/usr/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin
export CLASSPATH=./:$JAVA_HOME/lib:$JRE_HOME/lib

    2.2 gedit /usr/hadoop/conf/hadoop-env.sh    append these:

# The java implementation to use.  Required.
export JAVA_HOME=/usr/java/jdk1.7.0_67  
export HADOOP_INSTALL=/usr/hadoop
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin

3. restart linux
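A minimal sketch of step 1, assuming the archive is hadoop-1.2.1.tar.gz (the version shown in the test below) and that its contents should end up directly under /usr/hadoop:

sudo tar -zxvf hadoop-1.2.1.tar.gz -C /usr
sudo mv /usr/hadoop-1.2.1 /usr/hadoop
sudo chown -R hadoop:hadoop /usr/hadoop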

 

4. test

hadoop@ms:~$ 
hadoop@ms:~$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
hadoop@ms:~$ hadoop version
Hadoop 1.2.1
Subversion https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152
Compiled by mattf on Mon Jul 22 15:23:09 PDT 2013
From source with checksum 6923c86528809c4e7e6f493b6b413a9a
This command was run using /usr/hadoop/hadoop-core-1.2.1.jar
hadoop@ms:~$ 

 

5. hadoop pseudo-distributed mode

1. Edit three configuration files under /usr/hadoop/conf:
1). core-site.xml:

<configuration>  
    <property>  
        <name>fs.default.name</name>  
        <value>hdfs://localhost:9000</value>  
    </property>  
    <property>  
        <name>hadoop.tmp.dir</name>  
        <value>/usr/hadoop/tmp</value>  
    </property>  
</configuration>  


2). hdfs-site.xml:

<configuration>  
    <property>  
        <name>dfs.replication</name>  
        <value>2</value>  
    </property>  
    <property>  
        <name>dfs.name.dir</name>  
        <value>/usr/hadoop/datalog1,/usr/hadoop/datalog2</value>  
    </property>  
    <property>  
        <name>dfs.data.dir</name>  
        <value>/usr/hadoop/data1,/usr/hadoop/data2</value>  
    </property>  
</configuration>  

3). mapred-site.xml:

<configuration>     
    <property>    
        <name>mapred.job.tracker</name>  
        <value>localhost:9001</value>     
    </property>  
</configuration>  
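The directories referenced in the XML above may not all be created automatically, so creating them up front as user "hadoop" (paths matching the values used above) avoids permission surprises:

mkdir -p /usr/hadoop/tmp /usr/hadoop/datalog1 /usr/hadoop/datalog2 /usr/hadoop/data1 /usr/hadoop/data2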


2. Format the NameNode before starting the Hadoop services (a sketch for starting them follows):
source /usr/hadoop/conf/hadoop-env.sh  
hadoop namenode -format  
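Once formatting succeeds, a minimal sketch for starting and checking the daemons (start-all.sh and stop-all.sh ship with Hadoop 1.x; jps comes with the JDK). The NameNode web UI should then answer at http://localhost:50070:

start-all.sh
jps             # should list NameNode, DataNode, SecondaryNameNode, JobTracker, TaskTracker
stop-all.sh     # stops all daemons again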

 

6*. install hbase [pseudo-distributed]

1. unzip hbase.tar.gz into /usr/hbase
  then, ensure user "hadoop" owns /usr/hbase
  
sudo chown -R hadoop:hadoop /usr/hbase  

2. edit environment
    2.1 gedit /etc/profile    append these: 

export HBASE_HOME="/usr/hbase"
export PATH=$HBASE_HOME/bin:$PATH

    2.2 gedit /usr/hbase/conf/hbase-site.xml    add these inside the <configuration> element:

<property>
     <name>hbase.rootdir</name>
     <!-- must match the HDFS address configured in Hadoop's core-site.xml -->
     <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
     <name>hbase.cluster.distributed</name>
     <value>true</value>
</property>
<property>
     <name>hbase.master.info.port</name>
     <value>60010</value>
</property>
  
    2.3 gedit /usr/hbase/conf/hbase-env.sh    modify these:

# The java implementation to use.  Java 1.6 required.
export JAVA_HOME=/usr/java/jdk1.7.0_67

# Extra Java CLASSPATH elements.  Optional.
export HBASE_CLASSPATH=/usr/hadoop/conf

# Tell HBase whether it should manage its own instance of ZooKeeper or not.
export HBASE_MANAGES_ZK=true

3. restart linux
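A minimal sketch for bringing HBase up once HDFS is running (start-hbase.sh, stop-hbase.sh and hbase shell are standard HBase scripts; the web UI port matches the hbase.master.info.port value set above, i.e. http://localhost:60010):

start-hbase.sh
jps             # should additionally show HMaster, HRegionServer and HQuorumPeer
hbase shell     # run "status" inside the shell, then "exit"
stop-hbase.sh   # stops HBase again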
    

 

 

 

#. references

http://blog.csdn.net/zhaoyl03/article/details/8657104#

http://www.tuicool.com/articles/VZn6zi

http://blog.csdn.net/pdw2009/article/details/21261417

http://www.th7.cn/db/nosql/201510/134214.shtml

 
