Installing Hive 3.1.2 on Single-Node Hadoop 3.1.3 (RedHat 8.0)
Posted by robinson1988
Download Hive 3.1.2 and upload it to /tmp: apache-hive-3.1.2-bin.tar.gz
Download the MySQL JDBC driver and upload it to /tmp: mysql-connector-java-5.1.48.tar.gz
Install MySQL 8.0.19. After installing MySQL 8.0.19 on RedHat 8.0, entering MySQL reports the following error:
[root@server ~]# mysql -uroot -p
mysql: error while loading shared libraries: libtinfo.so.5: cannot open shared object file: No such file or directory
Fixing the error: RedHat 8.0 ships libtinfo.so.6, so create a compatibility symlink for the old soname:
find / -name libtinfo.so*
cd /usr/lib64/
ll libtinfo*
ln -s libtinfo.so.6.1 libtinfo.so.5
ll libtinfo*
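The fix works because the MySQL client binary was linked against the old libtinfo.so.5 soname, while RHEL 8 only ships libtinfo.so.6; a symlink lets the old name resolve to the new library. A minimal sketch of the same technique in a throwaway directory (the files are empty stand-ins, not the real libraries):

```shell
# Demonstrate the symlink trick in a scratch directory; the real fix
# does the same thing in /usr/lib64 with the actual libtinfo library.
mkdir -p /tmp/libdemo
cd /tmp/libdemo
touch libtinfo.so.6.1                  # stand-in for the library RHEL 8 ships
ln -sf libtinfo.so.6.1 libtinfo.so.5   # old soname now resolves to the new file
readlink libtinfo.so.5                 # prints: libtinfo.so.6.1
```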
Create a dedicated hive account in MySQL (optional; you can also just use root):
create user 'hive'@'%' identified by 'hive';
grant all privileges on *.* to 'hive'@'%';
Extract the MySQL driver:
cd /tmp
tar -zxvf mysql-connector-java-5.1.48.tar.gz
Extract Hive 3.1.2 and move it to /usr/local/:
cd /tmp
tar -xzvf apache-hive-3.1.2-bin.tar.gz
mv apache-hive-3.1.2-bin /usr/local/hive
Configure the Hive environment variables:
vi /etc/profile
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin
Apply the environment variables:
source /etc/profile
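A quick sanity check that the variables took effect (paths as configured above):

```shell
# After sourcing /etc/profile, HIVE_HOME should resolve and PATH should
# contain Hive's bin directory.
export HIVE_HOME=/usr/local/hive        # what /etc/profile sets
export PATH=$PATH:$HIVE_HOME/bin
echo "$HIVE_HOME"                       # prints: /usr/local/hive
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "hive on PATH" ;;
esac
```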
Start Hadoop:
cd /usr/local/hadoop-3.1.3
sbin/start-all.sh
Create these two directories in HDFS first and set their permissions to 777:
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -mkdir -p /tmp/hive
hadoop fs -chmod -R 777 /user/hive/warehouse
hadoop fs -chmod -R 777 /tmp
Also create a local scratch directory for Hive:
cd /usr/local/hive
mkdir temp
chmod -R 777 temp
Configure hive-site.xml:
cd /usr/local/hive/conf
cp hive-default.xml.template hive-site.xml
vi hive-site.xml
<property>
<name>hive.exec.local.scratchdir</name>
<value>${system:java.io.tmpdir}/${system:user.name}</value>
<description>Local scratch space for Hive jobs</description>
</property>
Change the value above to /usr/local/hive/temp/root
<property>
<name>hive.downloaded.resources.dir</name>
<value>${system:java.io.tmpdir}/${hive.session.id}_resources</value>
<description>Temporary local directory for added resources in the remote file system.</description>
</property>
Change the value above to /usr/local/hive/temp/${hive.session.id}_resources
<property>
<name>hive.server2.logging.operation.log.location</name>
<value>${system:java.io.tmpdir}/${system:user.name}/operation_logs</value>
<description>Top level directory where operation logs are stored if logging functionality is enabled</description>
</property>
Change the value above to /usr/local/hive/temp/root/operation_logs
<property>
<name>hive.querylog.location</name>
<value>${system:java.io.tmpdir}/${system:user.name}</value>
<description>Location of Hive run time structured log file</description>
</property>
Change the value above to /usr/local/hive/temp/root/
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:derby:;databaseName=metastore_db;create=true</value>
<description>
JDBC connect string for a JDBC metastore.
To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
</description>
</property>
Change the value above to jdbc:mysql://server:3306/hive?createDatabaseIfNotExist=true&useUnicode=true&characterEncoding=UTF-8&useSSL=false (inside the XML file each & must be escaped as &amp;).
Note: server is the hostname; if your hostname is not server, substitute your own.
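Because this URL lives inside an XML file, the & separators have to be written as &amp; or the parser will reject the file. A small self-contained check on a scratch file (not the real config):

```shell
# Write the escaped value to a scratch file and confirm the ampersands
# are XML-escaped; an unescaped & would make the XML parser fail.
cat > /tmp/jdbc-url-demo.xml <<'EOF'
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://server:3306/hive?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
EOF
grep -o '&amp;' /tmp/jdbc-url-demo.xml | wc -l   # prints: 3
```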
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>APP</value>
<description>Username to use against metastore database</description>
</property>
Change the value above to hive
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>mine</value>
<description>password to use against metastore database</description>
</property>
Change the value above to hive
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>org.apache.derby.jdbc.EmbeddedDriver</value>
<description>Driver class name for a JDBC metastore</description>
</property>
Change the value above to com.mysql.jdbc.Driver
Configure log4j:
cp hive-log4j2.properties.template hive-log4j2.properties
vi hive-log4j2.properties
property.hive.log.dir = /usr/local/hive/temp/root
Configure hive-env.sh:
cd /usr/local/hive/conf/
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
export HADOOP_HOME=/usr/local/hadoop-3.1.3
export HIVE_CONF_DIR=/usr/local/hive/conf
export HIVE_AUX_JARS_PATH=/usr/local/hive/lib
Copy the MySQL driver into Hive's lib directory:
cd /tmp/mysql-connector-java-5.1.48
cp mysql-connector-java-5.1.48-bin.jar /usr/local/hive/lib/
Initialize the metastore schema in MySQL:
cd /usr/local/hive/bin
schematool -dbType mysql -initSchema
[root@server bin]# schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-3.1.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:518)
at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:536)
at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:430)
at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5141)
at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5104)
at org.apache.hive.beeline.HiveSchemaTool.<init>(HiveSchemaTool.java:96)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1473)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Fixing the error: Hive 3.1.2 ships an older guava than Hadoop 3.1.3, so replace Hive's copy with Hadoop's:
cd /usr/local/hadoop-3.1.3/share/hadoop/common/lib
ls guava*
cd /usr/local/hive/lib
ls guava*
rm -rf guava*
cd /usr/local/hadoop-3.1.3/share/hadoop/common/lib
ls guava*
cp guava-27.0-jre.jar /usr/local/hive/lib/guava-27.0-jre.jar
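The same swap, sketched with empty stand-in files in scratch directories (Hive 3.1.2 is assumed here to ship guava-19.0.jar; check `ls guava*` in your install for the actual name):

```shell
# Mimic the jar swap: remove Hive's old guava and copy in Hadoop's newer
# one, so both components load the same guava version at runtime.
hadoop_lib=/tmp/jardemo/hadoop-lib
hive_lib=/tmp/jardemo/hive-lib
mkdir -p "$hadoop_lib" "$hive_lib"
touch "$hadoop_lib/guava-27.0-jre.jar" "$hive_lib/guava-19.0.jar"
rm -f "$hive_lib"/guava-*.jar
cp "$hadoop_lib"/guava-27.0-jre.jar "$hive_lib"/
ls "$hive_lib"                         # prints: guava-27.0-jre.jar
```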
Initialize the metastore schema again:
cd /usr/local/hive/bin
schematool -dbType mysql -initSchema
[root@server bin]# schematool -dbType mysql -initSchema
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-3.1.3/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Exception in thread "main" java.lang.RuntimeException: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3215,96,"file:/usr/local/hive/conf/hive-site.xml"]
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3024)
at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2973)
at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2848)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1460)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:4996)
at org.apache.hadoop.hive.conf.HiveConf.getVar(HiveConf.java:5069)
at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5156)
at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5104)
at org.apache.hive.beeline.HiveSchemaTool.<init>(HiveSchemaTool.java:96)
at org.apache.hive.beeline.HiveSchemaTool.main(HiveSchemaTool.java:1473)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.util.RunJar.run(RunJar.java:318)
at org.apache.hadoop.util.RunJar.main(RunJar.java:232)
Caused by: com.ctc.wstx.exc.WstxParsingException: Illegal character entity: expansion character (code 0x8
at [row,col,system-id]: [3215,96,"file:/usr/local/hive/conf/hive-site.xml"]
at com.ctc.wstx.sr.StreamScanner.constructWfcException(StreamScanner.java:621)
at com.ctc.wstx.sr.StreamScanner.throwParseError(StreamScanner.java:491)
at com.ctc.wstx.sr.StreamScanner.reportIllegalChar(StreamScanner.java:2456)
at com.ctc.wstx.sr.StreamScanner.validateChar(StreamScanner.java:2403)
at com.ctc.wstx.sr.StreamScanner.resolveCharEnt(StreamScanner.java:2369)
at com.ctc.wstx.sr.StreamScanner.fullyResolveEntity(StreamScanner.java:1515)
at com.ctc.wstx.sr.BasicStreamReader.nextFromTree(BasicStreamReader.java:2828)
at com.ctc.wstx.sr.BasicStreamReader.next(BasicStreamReader.java:1123)
at org.apache.hadoop.conf.Configuration$Parser.parseNext(Configuration.java:3320)
at org.apache.hadoop.conf.Configuration$Parser.parse(Configuration.java:3114)
at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:3007)
... 15 more
Fixing the error:
cd /usr/local/hive/conf
vi hive-site.xml
Type 3215 then press Shift+G (i.e. 3215G) to jump to that line:
<property>
<name>hive.txn.xlock.iow</name>
<value>true</value>
<description>
Ensures commands with OVERWRITE (such as INSERT OVERWRITE) acquire Exclusive locks for&#8;transactional tables. This ensures that inserts (w/o overwrite) running concurrently
are not hidden by the INSERT OVERWRITE.
</description>
</property>
Delete the invalid &#8; character entity between for and transactional so the text reads for transactional tables.
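Instead of hand-editing, the invalid entity can also be stripped with sed. A demo on a scratch file containing the same offending sequence (the real command would target /usr/local/hive/conf/hive-site.xml):

```shell
# Reproduce the broken template line in a scratch file, then replace the
# illegal &#8; entity (a control character the XML parser rejects) with a space.
printf '<description>locks for&#8;transactional tables</description>\n' > /tmp/entity-demo.xml
sed -i 's/&#8;/ /g' /tmp/entity-demo.xml
cat /tmp/entity-demo.xml   # prints: <description>locks for transactional tables</description>
```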
Initialize the metastore schema again:
cd /usr/local/hive/bin
schematool -dbType mysql -initSchema
Remove Hive's log4j-slf4j binding to silence the multiple-bindings warning:
cd /usr/local/hive/lib
ls log4j-slf4j-impl-2.10.0.jar
rm -rf log4j-slf4j-impl-2.10.0.jar
Run hive; if this prompt appears, the installation succeeded:
[root@server ~]# hive
which: no hbase in (/usr/share/Modules/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/jdk1.8.0_241/bin:/usr/local/hadoop-3.1.3/bin:/usr/local/hadoop-3.1.3/sbin:/root/bin:/usr/local/jdk1.8.0_241/bin:/usr/local/hadoop-3.1.3/bin:/usr/local/hadoop-3.1.3/sbin:/usr/local/mysql/bin:/usr/local/mysql/lib:/usr/local/jdk1.8.0_241/bin:/usr/local/hadoop-3.1.3/bin:/usr/local/hadoop-3.1.3/sbin:/usr/local/mysql/bin:/usr/local/mysql/lib:/usr/local/hive/bin)
Hive Session ID = 64c2dd98-fe68-4327-a14f-69cd2c90f882
Logging initialized using configuration in file:/usr/local/hive/conf/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
Hive Session ID = a4eff00a-2fc7-42d5-994e-f7f3adc6f4cf
hive>