Hive error: one HS2 instance works fine, the other fails

After connecting to Hive through beeline with ZooKeeper (ZK) service discovery and running select count(*) from table, the query succeeds when the session lands on one HS2 instance; the other HS2 fails with:

 ERROR : Failed to execute tez graph.
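For reference, a ZK-based beeline connection of the kind described above typically looks like the following (the ZooKeeper hosts and the zooKeeperNamespace value are placeholders, not values taken from this cluster):

# ZK service discovery hands the session to one of the registered HS2 instances
beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2" \
        -e "select count(*) from table"

Reconnecting a few times lands on different HS2 instances, which is how the failing one gets hit only some of the time.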

Checking the Timeline Service log shows an error there as well.

Fix:

Check the status of ats-hbase:

yarn app -status ats-hbase

Option 1: try restarting ats-hbase

yarn app -stop ats-hbase

yarn app -start ats-hbase

Option 2: destroy ats-hbase and restart YARN

yarn app -destroy ats-hbase

Restart YARN.

In the console, go to HDFS -> config and set the proxy parameters to *.

Change the YARN proxy parameters to * as well (a way to verify them is sketched below).
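Assuming the proxy settings above refer to the standard hadoop.proxyuser.<user>.hosts / hadoop.proxyuser.<user>.groups keys (an assumption; the exact keys depend on the console layout), the effective values can be checked from any node:

# Print the effective proxyuser values from the client config (yarn is just an example user)
hdfs getconf -confKey hadoop.proxyuser.yarn.hosts
hdfs getconf -confKey hadoop.proxyuser.yarn.groups

Both should print * once the change has been pushed out and the services restarted.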

Upgrading the Hive package on EMR and starting HS2 on a gateway

Upgrade:


mkdir /opt/apps/hive
cd /opt/apps/hive
wget http://emr.oss-cn-hangzhou.aliyuncs.com/ecm-repo/hive/opay/apache-hive-2.3.5-bin.tar.gz
md5sum apache-hive-2.3.5-bin.tar.gz
# expected output:
# 229e32d99d8d288bdece8c897d6cc625  apache-hive-2.3.5-bin.tar.gz
tar xvf apache-hive-2.3.5-bin.tar.gz

1. Upgrade the Hive package on the Gateway
a. Log in to the Gateway node and extract the new Hive package to a directory, e.g. /opt/apps/hive
b. cp /usr/lib/hive-current/lib/emr-hive-hook*jar /opt/apps/hive/apache-hive-2.3.5-bin/lib
c. unlink /usr/lib/hive-current
d. ln -s /opt/apps/hive/apache-hive-2.3.5-bin /usr/lib/hive-current
e. Create the other symlinks

ln -s /opt/apps/ecm/service/ranger/1.2.0-1.1.0/package/ranger-1.2.0-1.1.0/ranger-1.2.0-1.1.0-hive-plugin/lib/ranger-hive-plugin-impl /usr/lib/hive-current/lib/ranger-hive-plugin-impl
ln -s /opt/apps/ecm/service/ranger/1.2.0-1.1.0/package/ranger-1.2.0-1.1.0/ranger-1.2.0-1.1.0-hive-plugin/lib/ranger-hive-plugin-shim-1.2.0.jar /usr/lib/hive-current/lib/ranger-hive-plugin-shim-1.2.0.jar
ln -s /opt/apps/ecm/service/ranger/1.2.0-1.1.0/package/ranger-1.2.0-1.1.0/ranger-1.2.0-1.1.0-hive-plugin/lib/ranger-plugin-classloader-1.2.0.jar /usr/lib/hive-current/lib/ranger-plugin-classloader-1.2.0.jar

ln -s /usr/lib/spark-current/jars/scala-library-2.11.12.jar /usr/lib/hive-current/lib/scala-library-2.11.12.jar
ln -s /usr/lib/spark-current/jars/spark-core_2.11-2.4.3.jar /usr/lib/hive-current/lib/spark-core_2.11-2.4.3.jar
ln -s /usr/lib/spark-current/jars/spark-network-common_2.11-2.4.3.jar /usr/lib/hive-current/lib/spark-network-common_2.11-2.4.3.jar
ln -s /usr/lib/spark-current/jars/spark-unsafe_2.11-2.4.3.jar /usr/lib/hive-current/lib/spark-unsafe_2.11-2.4.3.jar

f. Run a quick test to confirm the client works (a minimal check is sketched below)
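A minimal sanity check, assuming a hadoop user and an existing database/table (the names below are placeholders):

# Verify the new client version and metastore connectivity
hive --version
hive -e "show databases;"
# Optionally run a query that launches an actual job (table name is a placeholder)
hive -e "select count(*) from some_db.some_table;"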

 

2. Upgrade the Hive package on the HiveServer nodes
a. Log in to emr-header-1 and extract the new Hive package to a directory, e.g. /opt/apps/hive
b. cp /usr/lib/hive-current/lib/emr-hive-hook*jar /opt/apps/hive/apache-hive-2.3.5-bin/lib
c. unlink /usr/lib/hive-current
d. ln -s /opt/apps/hive/apache-hive-2.3.5-bin /usr/lib/hive-current
e. Create the other symlinks

ln -s /opt/apps/ecm/service/ranger/1.2.0-1.1.0/package/ranger-1.2.0-1.1.0/ranger-1.2.0-1.1.0-hive-plugin/lib/ranger-hive-plugin-impl /usr/lib/hive-current/lib/ranger-hive-plugin-impl
ln -s /opt/apps/ecm/service/ranger/1.2.0-1.1.0/package/ranger-1.2.0-1.1.0/ranger-1.2.0-1.1.0-hive-plugin/lib/ranger-hive-plugin-shim-1.2.0.jar /usr/lib/hive-current/lib/ranger-hive-plugin-shim-1.2.0.jar
ln -s /opt/apps/ecm/service/ranger/1.2.0-1.1.0/package/ranger-1.2.0-1.1.0/ranger-1.2.0-1.1.0-hive-plugin/lib/ranger-plugin-classloader-1.2.0.jar /usr/lib/hive-current/lib/ranger-plugin-classloader-1.2.0.jar

ln -s /usr/lib/spark-current/jars/scala-library-2.11.12.jar /usr/lib/hive-current/lib/scala-library-2.11.12.jar
ln -s /usr/lib/spark-current/jars/spark-core_2.11-2.4.3.jar /usr/lib/hive-current/lib/spark-core_2.11-2.4.3.jar
ln -s /usr/lib/spark-current/jars/spark-network-common_2.11-2.4.3.jar /usr/lib/hive-current/lib/spark-network-common_2.11-2.4.3.jar
ln -s /usr/lib/spark-current/jars/spark-unsafe_2.11-2.4.3.jar /usr/lib/hive-current/lib/spark-unsafe_2.11-2.4.3.jar

f. Log in to emr-header-2 and repeat steps a through d
g. Restart HiveServer from the web console
h. Test with beeline/Hue to confirm everything works (a hedged beeline check is sketched below)
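A hedged beeline check against each upgraded instance (the default Thrift port 10000 and the hadoop user are assumptions; substitute the cluster's actual HS2 endpoints):

# Hit each HS2 directly so both upgraded instances get exercised
beeline -u "jdbc:hive2://emr-header-1:10000/default" -n hadoop -e "select 1"
beeline -u "jdbc:hive2://emr-header-2:10000/default" -n hadoop -e "select 1"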


Starting HS2 on the gateway (optional):
Copy the Ranger-related configuration files.
Run the following commands as the root user:
scp root@emr-header-1:/etc/ecm/hive-conf/ranger-hive-audit.xml /etc/ecm/hive-conf
scp root@emr-header-1:/etc/ecm/hive-conf/ranger-hive-security.xml /etc/ecm/hive-conf
scp root@emr-header-1:/etc/ecm/hive-conf/ranger-policymgr-ssl.xml /etc/ecm/hive-conf
scp root@emr-header-1:/etc/ecm/hive-conf/ranger-security.xml /etc/ecm/hive-conf
chown hadoop:hadoop /etc/ecm/hive-conf/ranger-*
Start HiveServer2
Run the start command as the hadoop user.
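A minimal start sketch, assuming the standard hive launcher under /usr/lib/hive-current and the /etc/ecm/hive-conf directory the Ranger files were copied into (both are assumptions about this setup):

# Run these as the hadoop user: point Hive at the EMR config directory and start HS2 in the background
export HIVE_CONF_DIR=/etc/ecm/hive-conf
nohup /usr/lib/hive-current/bin/hive --service hiveserver2 > /tmp/hiveserver2.log 2>&1 &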
