Hive Installation Steps Explained

Posted by 安静的技术控


Hive has no cluster of its own; it is just a client tool, so it only needs to be installed on a single host.

Download links:

MySQL download and installation guide: https://pan.baidu.com/s/1ddxqAzeTDs623xOr27ZeJw (extraction code: isd1)

Hive download: https://pan.baidu.com/s/1bqARkuC2DGiQcswmuLVUxA (extraction code: r8f0)

  1. Basic installation steps
    a. Extract the tarball to the target directory

tar -zxvf apache-hive-2.1.1-bin.tar.gz -C /usr/local/software/
b. Create a symlink

ln -s /usr/local/software/apache-hive-2.1.1-bin /usr/local/software/hive

[root@s102 /usr/local/software]#ls -al
total 150712
drwxr-xr-x. 13 root root 4096 Jul 10 03:46 .
drwxr-xr-x. 14 root root 4096 Mar 14 10:22 ..
drwxr-xr-x. 9 root root 4096 Jul 10 03:43 apache-hive-2.1.1-bin
-rw-r--r--. 1 root root 149756462 Jul 10 03:42 apache-hive-2.1.1-bin.tar.gz
lrwxrwxrwx. 1 root root 39 Mar 29 23:37 elasticsearch -> /usr/local/software/elasticsearch-6.8.6
drwxr-xr-x. 9 es es 4096 Mar 29 23:37 elasticsearch-5.4.3
drwxr-xr-x. 9 es es 4096 Mar 29 23:48 elasticsearch-6.8.6
-rw-r--r--. 1 root root 4502082 Mar 29 23:37 elasticsearch-analysis-ik-5.4.3.zip
drwxr-xr-x. 9 es es 4096 Mar 29 23:37 elasticsearch-head
lrwxrwxrwx. 1 root root 31 Sep 5 2020 flink -> /usr/local/software/flink-1.7.2
drwxr-xr-x. 9 root root 4096 Sep 5 2020 flink-1.7.2
-rw-r--r--. 1 root root 7435 Aug 23 2020 FlinkExample-1.0-SNAPSHOT.jar
lrwxrwxrwx. 1 root root 32 Jul 31 2020 hadoop -> /usr/local/software/hadoop-2.7.3
drwxr-xr-x. 11 root root 4096 Jul 31 2020 hadoop-2.7.3
lrwxrwxrwx. 1 root root 41 Jul 10 03:44 hive -> /usr/local/software/apache-hive-2.1.1-bin
lrwxrwxrwx. 1 root root 32 Jul 31 2020 jdk -> /usr/local/software/jdk1.8.0_65/
drwxr-xr-x. 8 root root 4096 Jul 31 2020 jdk1.8.0_65
drwxr-xr-x. 5 root root 4096 Jul 10 03:47 metastore_db
drwxr-xr-x. 10 mysql mysql 4096 Aug 15 2020 mysql
lrwxrwxrwx. 1 root root 31 Sep 6 2020 redis -> /usr/local/software/redis-3.2.8
drwxr-xr-x. 3 root root 16 Sep 6 2020 redis-3.2.8
lrwxrwxrwx. 1 root root 45 Jun 19 19:49 spark -> /usr/local/software/spark-2.2.0-bin-hadoop2.7
drwxr-xr-x. 14 root root 4096 Jun 19 22:47 spark-2.2.0-bin-hadoop2.7
-rw-r--r--. 1 root root 685 Jun 20 03:07 word.txt
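A nice side effect of this symlink layout is that upgrading Hive later is just re-pointing the link; `HIVE_HOME` and `PATH` keep working unchanged. A self-contained sketch in a temp directory (the 2.3.9 version below is hypothetical, purely for illustration):

```shell
# Demonstrate re-pointing the "hive" symlink, as done during an upgrade.
SW=$(mktemp -d)                                 # stand-in for /usr/local/software
mkdir "$SW/apache-hive-2.1.1-bin" "$SW/apache-hive-2.3.9-bin"
ln -sfn "$SW/apache-hive-2.1.1-bin" "$SW/hive"  # initial link, as in step b
readlink "$SW/hive"
ln -sfn "$SW/apache-hive-2.3.9-bin" "$SW/hive"  # upgrade: -f replaces, -n treats the old link as a file
readlink "$SW/hive"
```

Without `-n`, `ln` would follow the existing link and create the new link inside the old target directory instead of replacing it.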
c. Edit /etc/profile

# add at the end of the file
export HIVE_HOME=/usr/local/software/hive
export PATH=$PATH:$HIVE_HOME/bin
d. Apply the environment variables

source /etc/profile
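To confirm the variables took effect, the exports from step c can be checked in the current shell; the `case` pattern below is just one portable way to test PATH membership:

```shell
# Re-create the exports from step c and verify PATH now includes Hive.
export HIVE_HOME=/usr/local/software/hive
export PATH=$PATH:$HIVE_HOME/bin
echo "$HIVE_HOME"
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "PATH ok" ;;
  *)                    echo "PATH missing hive/bin" ;;
esac
```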

e. Create the configuration file: cp hive-default.xml.template hive-site.xml (this is the only configuration file)

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://s102:3306/hive?createDatabaseIfNotExist=true</value>
  <description>MySQL connection URL</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>MySQL JDBC driver</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>MySQL login user</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>123456</value>
  <description>MySQL login password</description>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>hdfs://s101:9000/hive</value>
  <description>location of default database for the warehouse</description>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <value>false</value>
  <description>no impersonation check needed</description>
</property>
<property>
  <name>hive.metastore.schema.verification</name>
  <value>false</value>
  <description>no schema version check needed</description>
</property>

f. Copy the MySQL driver from the local Maven repository into hive/lib

Declare the dependency in pom.xml first, then copy the downloaded jar from C:\\sjfxwj\\code location\\maven_local_repository\\mysql

The driver version does not need to match the MySQL server version.

Driver download: https://pan.baidu.com/s/1MrYyKgw2HsomTNqb2vnBsg (extraction code: ifoz)

[root@s102 /usr/local/software/hive/lib]#ls -al |grep mysql
-rw-r--r--. 1 root root 968668 Aug 16 2020 mysql-connector-java-5.1.35.jar
[root@s102 /usr/local/software/hive/lib]#pwd
/usr/local/software/hive/lib
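With the driver in place, a small pre-flight check before step g can save a confusing stack trace later. This sketch only reports whether the two files schematool depends on exist, using the paths configured in this article:

```shell
# Pre-flight: schematool needs hive-site.xml and a MySQL JDBC jar in lib/.
HIVE_HOME=${HIVE_HOME:-/usr/local/software/hive}
for f in "$HIVE_HOME/conf/hive-site.xml" \
         "$HIVE_HOME"/lib/mysql-connector-java-*.jar; do
  if [ -e "$f" ]; then
    echo "found:   $f"
  else
    echo "MISSING: $f"
  fi
done
```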
g. Initialize Hive's metastore schema

/usr/local/software/hive/bin/schematool -dbType mysql -initSchema
[root@s102 /usr/local/software/hive/bin]#./schematool -dbType mysql -initSchema
which: no hbase in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/software/jdk/bin:/usr/local/software/hadoop/bin:/usr/local/software/hadoop/sbin:/usr/local/software/flink/bin:/usr/local/software/zookeeper/bin:/usr/local/software/kafka/bin:/usr/local/software/flink/bin:/usr/local/software/mysql/bin:/usr/local/software/redis/bin:/usr/local/software/elasticsearch/bin:/usr/local/software/spark/bin::/usr/local/software/spark/sbin:/root/bin:/usr/local/software/jdk/bin:/usr/local/software/hadoop/bin:/usr/local/software/hadoop/sbin:/usr/local/software/flink/bin:/usr/local/software/zookeeper/bin:/usr/local/software/kafka/bin:/usr/local/software/flink/bin:/usr/local/software/mysql/bin:/usr/local/software/redis/bin:/usr/local/software/elasticsearch/bin:/usr/local/software/spark/bin::/usr/local/software/spark/sbin:/usr/local/software/hive/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/software/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/software/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL: jdbc:mysql://s102:3306/hive?createDatabaseIfNotExist=true
Metastore Connection Driver : com.mysql.jdbc.Driver
Metastore connection User: root
Starting metastore schema initialization to 2.1.0
Initialization script hive-schema-2.1.0.mysql.sql
Initialization script completed
schemaTool completed

  2. Log in to Hive and verify with some data
    $hive>create database mydb ; -- create a database
    $hive>use mydb ; -- switch to the database
    $hive>create table custs(id int , name string) ; -- create a table
    $hive>desc custs ; -- show the table structure
    $hive>desc formatted custs ; -- show the formatted table structure
    $hive>insert into custs(id,name) values(1,'tom'); -- insert data; runs as a MapReduce job
    $hive>select * from custs ; -- query; no MapReduce needed
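The interactive session above can also be captured as a script and run with `hive -f`; a sketch (verify.hql is an arbitrary name, and `if not exists` makes reruns harmless):

```shell
# Write the verification queries to a reusable HQL script.
cat > verify.hql <<'EOF'
create database if not exists mydb;
use mydb;
create table if not exists custs(id int, name string);
desc custs;
select * from custs;
EOF
wc -l < verify.hql    # 5 statements, one per line
# then run it with: hive -f verify.hql
```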
