Hive environment setup

Posted by kisf


Machine plan:

 

Host      IP              Process
master1   10.112.29.9     Hive server
master2   10.112.29.10    Hive client

MySQL installation: omitted here.

Create the hive user and the metastore database in MySQL; the connection can then be tested with:

mysql -uhive -h10.112.28.179 -phive123456
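
A minimal sketch of the user and database creation that the connection test above assumes. The database name hive matches the JDBC URL used later, and the password follows the test above; note that the hive-site.xml below uses hive as the password, so the two should be made consistent:

# run against the same MySQL host as an administrative user; names and password are assumptions
mysql -uroot -p -h10.112.28.179 <<'SQL'
CREATE DATABASE hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive123456';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL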

Hive 2.3.0 is used:

wget http://mirror.bit.edu.cn/apache/hive/hive-2.3.0/apache-hive-2.3.0-bin.tar.gz

Add the environment variables (e.g. in /etc/profile):

export HIVE_HOME=/letv/soft/apache-hive-2.3.0-bin
export HIVE_CONF_DIR=$HIVE_HOME/conf
export PATH=$PATH:$HIVE_HOME/bin

 

Sync the variables to master2 and run source /etc/profile.
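
A sketch of the sync, assuming the exports above were appended to /etc/profile on master1:

# copy the profile that contains the exports above, then reload it in a shell on master2
scp /etc/profile master2:/etc/profile
ssh master2
source /etc/profile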

 

Extract:

tar zxvf apache-hive-2.3.0-bin.tar.gz

  

Generate the keytabs:

addprinc -randkey hive/[email protected]
addprinc -randkey hive/[email protected]

xst -k /var/kerberos/krb5kdc/keytab/hive.keytab hive/[email protected]
xst -k /var/kerberos/krb5kdc/keytab/hive.keytab hive/[email protected]
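
These are kadmin commands; a sketch of running them non-interactively on the KDC host with kadmin.local (inside an interactive kadmin session they are entered exactly as shown above):

kadmin.local -q "addprinc -randkey hive/[email protected]"
kadmin.local -q "addprinc -randkey hive/[email protected]"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/keytab/hive.keytab hive/[email protected]"
kadmin.local -q "xst -k /var/kerberos/krb5kdc/keytab/hive.keytab hive/[email protected]"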

  

Copy the keytab to master2:

scp /var/kerberos/krb5kdc/keytab/hive.keytab master2:/var/kerberos/krb5kdc/keytab/
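
After copying, the keytab should normally be readable only by the account that runs the Hive services; a sketch assuming that account is hive, applied on both master1 and master2:

chown hive:hive /var/kerberos/krb5kdc/keytab/hive.keytab
chmod 400 /var/kerberos/krb5kdc/keytab/hive.keytab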

  

Add hive-site.xml:

vim conf/hive-site.xml 

<configuration>
    <property>
            <name>javax.jdo.option.ConnectionURL</name>
            <value>jdbc:mysql://10.112.28.179:3306/hive?createDatabaseIfNotExist=true</value>
            <description>JDBC connect string for a JDBC metastore</description>
    </property>
    <property>
            <name>javax.jdo.option.ConnectionDriverName</name>
            <value>com.mysql.jdbc.Driver</value>
            <description>Driver class name for a JDBC metastore</description>
    </property>

    <property>
            <name>javax.jdo.option.ConnectionUserName</name>
            <value>hive</value>
            <description>username to use against metastore database</description>
    </property>
    <property>
            <name>javax.jdo.option.ConnectionPassword</name>
            <value>hive</value>
            <description>password to use against metastore database</description>
    </property>
</configuration>
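
The keytab and principals created earlier are not referenced anywhere yet; a sketch of the Kerberos-related metastore properties that would normally go inside the same <configuration> element, assuming the keytab path and realm from the steps above (Hive expands _HOST to the local hostname):

    <!-- sketch: keytab path and principal assume the earlier kadmin steps -->
    <property>
            <name>hive.metastore.sasl.enabled</name>
            <value>true</value>
    </property>
    <property>
            <name>hive.metastore.kerberos.keytab.file</name>
            <value>/var/kerberos/krb5kdc/keytab/hive.keytab</value>
    </property>
    <property>
            <name>hive.metastore.kerberos.principal</name>
            <value>hive/[email protected]</value>
    </property>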

 

Add the following to Hadoop's core-site.xml:

<!-- hive config -->
        <property>
                <name>hadoop.proxyuser.hive.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.hive.groups</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.hdfs.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.hdfs.groups</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.HTTP.hosts</name>
                <value>*</value>
        </property>
        <property>
                <name>hadoop.proxyuser.HTTP.groups</name>
                <value>*</value>
         </property>

Sync it to the other machines:

scp etc/hadoop/core-site.xml master2:/xxx/soft/hadoop-2.7.3/etc/hadoop/
scp etc/hadoop/core-site.xml slave2:/xxx/soft/hadoop-2.7.3/etc/hadoop/
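
Proxyuser settings are read by the NameNode and ResourceManager; a sketch of refreshing them without a full restart (restarting the cluster also works):

hdfs dfsadmin -refreshSuperUserGroupsConfiguration
yarn rmadmin -refreshSuperUserGroupsConfiguration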

  

Download the MySQL JDBC driver:

wget https://dev.mysql.com/get/Downloads/Connector-J/mysql-connector-java-5.1.44.tar.gz
tar zxvf mysql-connector-java-5.1.44.tar.gz 

 

Copy it to Hive's lib directory:

cp mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar apache-hive-2.3.0-bin/lib/

 

Client configuration:

Copy the Hive directory to master2:

scp -r apache-hive-2.3.0-bin/ master2:/xxx/soft/

  

On master2:

vim conf/hive-site.xml 

<configuration>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://master1:9083</value>
    </property>
</configuration>

  

Start Hive:

Initialize the metastore schema:

./bin/schematool -dbType mysql -initSchema
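
Whether the initialization succeeded can be checked with schematool's info mode, which prints the metastore connection URL and the schema version it finds:

./bin/schematool -dbType mysql -info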

 

Obtain a Kerberos ticket:

kinit -k -t /var/kerberos/krb5kdc/keytab/hive.keytab hive/[email protected]
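
klist shows the cached ticket and its expiry, which is a quick way to confirm the kinit worked:

klist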

Start the metastore server:

hive --service metastore &
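
Nothing needs to run as a daemon on the client side; a sketch of using Hive from master2, assuming the keytab copied there earlier and the master2 principal created above:

# on master2: obtain a ticket for the master2 principal, then run the CLI,
# which reaches the metastore via thrift://master1:9083 from hive-site.xml
kinit -k -t /var/kerberos/krb5kdc/keytab/hive.keytab hive/[email protected]
hive -e "show databases;"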

  

 

  

  
