2. Installing the Hive Database
======1. Installing the Hive Database======
<code>
1. The Hadoop environment described above must already be installed.
2. Install MySQL to store Hive's metadata. By default the metadata is stored in Derby, which supports only a single connection and is suitable only for testing; production environments use MySQL.
3. The installation environment is CentOS 6.5, IP: 192.168.0.12
</code>
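Before continuing, the prerequisites above can be sanity-checked from the shell. This is only a sketch: it reports whether the expected commands are on the PATH and installs or modifies nothing.

```shell
# Report whether each prerequisite command is available on this host.
# Purely a check: nothing is installed or modified.
status=""
for cmd in java hadoop mysql; do
    if command -v "$cmd" >/dev/null 2>&1; then
        state=found
    else
        state=missing
    fi
    echo "$cmd: $state"
    status="$status $cmd=$state"
done
```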
======2. Installing MySQL to Store Hive Metadata======
<code>
yum install mysql-server
service mysqld start      # start the service before connecting
mysql -uroot -p
create database hive;
update mysql.user set password=PASSWORD('root') where User='root';
flush privileges;
</code>
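Rather than typing the statements interactively, the metastore bootstrap SQL can be kept in a small file and reviewed before it is applied. The sketch below writes such a file; the dedicated `hive` user and the `hivepass` password are illustrative assumptions, not part of the original walkthrough. Using a dedicated account instead of root is simply a common hardening choice.

```shell
# Write the metastore bootstrap SQL to a file; review it, then feed it to
# MySQL with: mysql -uroot -p < /tmp/init-hive-metastore.sql
# The 'hive' user and 'hivepass' password below are illustrative assumptions.
cat > /tmp/init-hive-metastore.sql <<'EOF'
CREATE DATABASE IF NOT EXISTS hive;
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost' IDENTIFIED BY 'hivepass';
FLUSH PRIVILEGES;
EOF
echo "wrote /tmp/init-hive-metastore.sql"
```

If a dedicated user is created this way, the `ConnectionUserName`/`ConnectionPassword` values configured later in hive-site.xml should match it.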
======3. Installing Hive======
<code>
A Java environment is required; it was already configured along with Hadoop above.
cd /data/hadoop
wget -c http://114.242.101.2:808/hive/apache-hive-2.3.2-bin.tar.gz
tar xf apache-hive-2.3.2-bin.tar.gz
mv apache-hive-2.3.2-bin hive
chown -R hadoop:hadoop hive
Set the Hive environment variables (the Hadoop ones are already set):
vim /etc/profile
#hive
export HIVE_HOME=/data/hadoop/hive
export PATH=$HIVE_HOME/bin:$PATH
source /etc/profile
</code>
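The two export lines can be checked without touching /etc/profile directly: write them to a standalone snippet, source it, and confirm the variables resolve. The paths below follow the layout used in this article.

```shell
# Stage the Hive environment variables in a standalone snippet and verify
# that sourcing it puts $HIVE_HOME/bin at the front of the PATH.
cat > /tmp/hive-profile.sh <<'EOF'
export HIVE_HOME=/data/hadoop/hive
export PATH=$HIVE_HOME/bin:$PATH
EOF
. /tmp/hive-profile.sh
echo "HIVE_HOME=$HIVE_HOME"
echo "$PATH" | tr ':' '\n' | head -n 1
```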
======4. Editing the Hive Configuration File======
<code>
su - hadoop
cd /data/hadoop/hive/conf
mv hive-default.xml.template hive-site.xml
Clear everything between <configuration> and </configuration> in the file and add the following:
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://127.0.0.1:3306/hive?characterEncoding=UTF-8</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>root</value>
</property>
Copy the MySQL JDBC driver jar into Hive's lib directory:
cd /data/hadoop/hive/lib/
wget -c http://114.242.101.2:808/hive/mysql-connector-java-5.1.44-bin.jar
</code>
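The four properties above can be written out in one step. The sketch below generates a complete, minimal hive-site.xml at a temporary path so the result can be inspected before replacing the real conf/hive-site.xml; the connection values mirror the ones used in this article.

```shell
# Generate a minimal hive-site.xml containing only the four JDBC properties
# from this section; copy it over conf/hive-site.xml after reviewing it.
cat > /tmp/hive-site.xml <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://127.0.0.1:3306/hive?characterEncoding=UTF-8</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
  </property>
</configuration>
EOF
echo "wrote /tmp/hive-site.xml"
```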
======5. Hive's Default Storage Path on HDFS======
<code>
From the official documentation:
Hive uses Hadoop, so:
you must have Hadoop in your path OR
export HADOOP_HOME=<hadoop-install-dir>
In addition, you must use below HDFS commands to create
/tmp and /user/hive/warehouse (aka hive.metastore.warehouse.dir)
and set them chmod g+w before you can create a table in Hive.
su - hadoop
cd /data/hadoop/hadoop-2.7.4
./bin/hadoop fs -mkdir /tmp
./bin/hadoop fs -mkdir -p /user/hive/warehouse
./bin/hadoop fs -chmod g+w /tmp
./bin/hadoop fs -chmod g+w /user/hive/warehouse
</code>
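The four HDFS commands can be bundled into one idempotent script. The guard below is an assumption for convenience: on a machine where hadoop is not yet on the PATH it prints what it would run instead of failing, and `-mkdir -p` tolerates directories that already exist.

```shell
# Create Hive's working directories on HDFS idempotently. If hadoop is not
# on the PATH, just echo the commands instead of failing.
hdfs_cmd() {
    if command -v hadoop >/dev/null 2>&1; then
        hadoop "$@"
    else
        echo "would run: hadoop $*"
    fi
}
hdfs_cmd fs -mkdir -p /tmp
hdfs_cmd fs -mkdir -p /user/hive/warehouse
hdfs_cmd fs -chmod g+w /tmp
hdfs_cmd fs -chmod g+w /user/hive/warehouse
```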
======6. Running Hive======
<code>
Output like the following indicates Hive started successfully.
[[email protected] hadoop]$ hive
which: no hbase in (/data/hadoop/hadoop-2.7.4/bin:/data/hadoop/hive/bin:/usr/lib64/qt-3.3/bin:/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/sbin:/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.151-1.b12.el6_9.x86_64/bin:/home/hadoop/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/hadoop/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/hadoop/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Logging initialized using configuration in jar:file:/data/hadoop/hive/lib/hive-common-2.3.2.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>
>
>
>
</code>
======7. Initializing the Hive Metastore Database======
<code>
su - hadoop
schematool -initSchema -dbType mysql
Note: on Hive 2.x the metastore schema must be initialized with schematool before Hive can create tables.
</code>
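After initialization, schematool can also report on the schema it created: `schematool -info -dbType mysql` prints the schema version recorded in the MySQL `hive` database. The guard below is an added convenience so the snippet degrades gracefully when the command is not yet on the PATH.

```shell
# Verify the metastore schema after -initSchema. schematool -info prints
# the schema version recorded in the metastore database.
if command -v schematool >/dev/null 2>&1; then
    schematool -info -dbType mysql
    verified=yes
else
    echo "schematool not on PATH; run 'source /etc/profile' as the hadoop user first"
    verified=no
fi
```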
======8. Running Hive Commands======
<code>
hive> CREATE TABLE pokes (foo INT, bar STRING);
hive> CREATE TABLE invites (foo INT, bar STRING) PARTITIONED BY (ds STRING);
hive> SHOW TABLES;
hive> SHOW TABLES '.*s';
hive> DESCRIBE invites;
hive> ALTER TABLE events RENAME TO 3koobecaf;
hive> ALTER TABLE pokes ADD COLUMNS (new_col INT);
hive> ALTER TABLE invites ADD COLUMNS (new_col2 INT COMMENT 'a comment');
hive> ALTER TABLE invites REPLACE COLUMNS (foo INT, bar STRING, baz INT COMMENT 'baz replaces new_col2');
hive> ALTER TABLE invites REPLACE COLUMNS (foo INT COMMENT 'only keep the first column');
hive> DROP TABLE pokes;
</code>
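The interactive statements above can also be collected into a script and run non-interactively with `hive -f`. The sketch below writes a subset of the DDL to a file; the table names follow the official Hive getting-started examples used in this section, and `IF NOT EXISTS`/`IF EXISTS` are added so the script can be re-run safely.

```shell
# Save example DDL to a script; run it in batch mode with:
#   hive -f /tmp/example.hql
cat > /tmp/example.hql <<'EOF'
CREATE TABLE IF NOT EXISTS pokes (foo INT, bar STRING);
CREATE TABLE IF NOT EXISTS invites (foo INT, bar STRING) PARTITIONED BY (ds STRING);
SHOW TABLES;
DROP TABLE IF EXISTS pokes;
EOF
echo "wrote /tmp/example.hql"
```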