Hive JDBC connection fails with org.apache.thrift.transport.TTransportException: Invalid status -128


Connecting to Hive over JDBC fails; the HiveServer2 log shows the following error:

[HiveServer2-Handler-Pool: Thread-28]: server.TThreadPoolServer (TThreadPoolServer.java:run(253)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Invalid status -128
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:227)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Caused by: org.apache.thrift.transport.TTransportException: Invalid status -128
at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:230)
at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:184)
at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:262)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 4 more
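A commonly reported cause of "Invalid status -128" is a transport/authentication mismatch: the server's TSaslServerTransport expects a SASL handshake, but the client sends plain (unwrapped) Thrift bytes — typically a mismatched JDBC driver version, or a client/server disagreement about NOSASL mode (hive.server2.authentication in hive-site.xml). A minimal sketch of building the client-side JDBC URL so both sides agree; the host, port, and database values are assumptions:

```java
// Sketch: assemble a HiveServer2 JDBC URL whose auth mode matches the server.
// If the server sets hive.server2.authentication=NOSASL, the client URL must
// carry ;auth=noSasl — otherwise the SASL handshake fails with "Invalid
// status" errors like the one in the log above (and vice versa).
public class HiveUrl {
    static String hiveJdbcUrl(String host, int port, String db, boolean noSasl) {
        String url = "jdbc:hive2://" + host + ":" + port + "/" + db;
        return noSasl ? url + ";auth=noSasl" : url;
    }

    public static void main(String[] args) {
        // Default (SASL) HiveServer2:
        System.out.println(hiveJdbcUrl("localhost", 10000, "default", false));
        // NOSASL HiveServer2:
        System.out.println(hiveJdbcUrl("localhost", 10000, "default", true));
        // With the Hive JDBC driver on the classpath you would then do:
        //   Class.forName("org.apache.hive.jdbc.HiveDriver");
        //   Connection c = DriverManager.getConnection(url, "user", "");
    }
}
```

It is also worth confirming that the client's hive-jdbc jar matches the server version, since older Thrift clients produce the same handshake failure.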

Answer A: For this scenario either plain JDBC or a connection pool would be sufficient, but since you are already managing things with Spring, a connection pool is recommended. Note that Spring does not implement a pool itself; it wraps third-party pools such as C3P0, DBCP, and the recently popular BoneCP. Their configurations are all very similar; taking BoneCP as an example:
<bean id="dataSource" class="com.jolbox.bonecp.BoneCPDataSource"
destroy-method="close">
<property name="driverClass" value="${jdbc.driverClass}" />
<property name="jdbcUrl" value="${jdbc.url}" />
<property name="username" value="${jdbc.user}" />
<property name="password" value="${jdbc.password}" />
<property name="idleConnectionTestPeriod" value="60" />
<property name="idleMaxAge" value="240" />
<property name="maxConnectionsPerPartition" value="30" />
<property name="minConnectionsPerPartition" value="10" />
<property name="partitionCount" value="2" />
<property name="acquireIncrement" value="5" />
<property name="statementsCacheSize" value="100" />
<property name="releaseHelperThreads" value="3" />
</bean>
<bean id="jdbcTemplate" class="org.springframework.jdbc.core.JdbcTemplate">
<property name="dataSource" ref="dataSource" />
</bean>
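The ${jdbc.*} placeholders above only resolve if a property placeholder and a properties file are also wired into the context; a minimal sketch (the file name and values are hypothetical — for Hive the driver class would be org.apache.hive.jdbc.HiveDriver):

```xml
<!-- In the Spring context (requires the context namespace): -->
<context:property-placeholder location="classpath:jdbc.properties"/>

<!-- jdbc.properties (example values only):
jdbc.driverClass=org.apache.hive.jdbc.HiveDriver
jdbc.url=jdbc:hive2://localhost:10000/default
jdbc.user=hive
jdbc.password=
-->
```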

Connecting to the Hive data warehouse remotely over JDBC

1. Start the HiveServer2 server, which listens on port 10000: hive --service hiveserver2 & (the & runs it in the background). To check that it started, run jps and look for a RunJar process, or run netstat -anop | grep 10000 to see whether port 10000 is listening. If the port is reachable, beeline can connect to it.
2. Connect to HiveServer2 from the beeline command line: $>beeline is equivalent to $>hive --service beeline. Then connect to a database: $beeline>!connect jdbc:hive2://localhost/mydb1 connects to the Hive database mydb1.
3. !help prints a summary of the available commands.
4. !tables is another beeline command; it lists the tables.

Hive commands

//create a table
$hive>create table if not exists t1(name string) comment 'xx' row format delimited fields terminated by ',' stored as textfile;
//create an external table
$hive>create external table if not exists t1(name string) comment 'xx' row format delimited fields terminated by ',' stored as textfile;
//inspect a table
$hive>desc t2;
$hive>desc formatted t2;
$hive>load data local inpath '/home/centos/customers.txt' into table t2; //load from a local file into a Hive table; local means the file is uploaded from the local filesystem
//copy a table (MySQL equivalents shown for comparison)
$mysql>create table tt as select * from users; //copies both the structure and the data
$mysql>create table tt like users; //copies only the structure, no data

hive>create table tt as select * from users;
hive>create table tt like users;
hive>select count(*) from users; //a count(*) query must be translated into a MapReduce job
hive>select id,name from t2 order by id desc; //order by is a total ordering and also runs as MapReduce, because it requires aggregation
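The CREATE TABLE statements above all follow one fixed template; purely as an illustration (the helper and table names are hypothetical, not part of Hive's API), such DDL can be assembled programmatically before being sent over JDBC with Statement.execute:

```java
// Sketch: build the delimited-textfile CREATE TABLE DDL used above.
public class HiveDdl {
    static String createTable(String name, String cols, boolean external) {
        return "CREATE " + (external ? "EXTERNAL " : "") + "TABLE IF NOT EXISTS "
                + name + "(" + cols + ") ROW FORMAT DELIMITED"
                + " FIELDS TERMINATED BY ',' STORED AS TEXTFILE";
    }

    public static void main(String[] args) {
        System.out.println(createTable("t1", "name string", false));
        System.out.println(createTable("t1", "name string", true));
        // Either string can then be run over a JDBC connection:
        //   stmt.execute(createTable("t1", "name string", true));
    }
}
```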

Partitioned tables

One of Hive's optimization techniques is the partitioned table.
In Hive a database is a directory, a table is a directory, a partitioned table is also a directory, and each partition under the table is yet another directory; partitioning restricts the range of data scanned at the directory level.
//create a partitioned table
$hive>create table t3(id int,name string,age int) partitioned by (year int,month int) row format delimited fields terminated by ",";
//show a partitioned table's partitions
hive>show partitions t5;
//add partitions with alter table
$hive>alter table t3 add partition(year=2014,month=1) partition(year=2015,month=2);
$>hdfs dfs -ls -R /; //inspect the directories after adding the partitions
//load data into a specific partition
$hive>load data local inpath '/home/centos/customers.txt' into table t5 partition(year=2015,month=3);
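The partition clauses above are likewise templated on (year, month); a small sketch (helper names are hypothetical) that builds them, for scripting many partitions over JDBC:

```java
// Sketch: build the ADD PARTITION / LOAD DATA clauses shown above.
public class HivePartitions {
    static String partitionSpec(int year, int month) {
        return "PARTITION (year=" + year + ",month=" + month + ")";
    }

    static String addPartition(String table, int year, int month) {
        return "ALTER TABLE " + table + " ADD " + partitionSpec(year, month);
    }

    static String loadInto(String table, String localPath, int year, int month) {
        return "LOAD DATA LOCAL INPATH '" + localPath + "' INTO TABLE "
                + table + " " + partitionSpec(year, month);
    }

    public static void main(String[] args) {
        System.out.println(addPartition("t3", 2014, 1));
        System.out.println(loadInto("t5", "/home/centos/customers.txt", 2015, 3));
    }
}
```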











The above is the main content on Hive JDBC connection failing with org.apache.thrift.transport.TTransportException: Invalid status -128. If it did not solve your problem, see the following articles:

Sqoop fails to connect to MySQL and import into HDFS

Master node fails to connect to hiveserver2: Error: Could not open client transport with JDBC Uri: jdbc:hive2://hadoop01:10000:

beeline fails to connect to Hive

Error: Could not open client transport with JDBC Uri: jdbc:hive2://hadoop102:10000: Failed to open

How to batch-insert into Hive over JDBC

Hive JDBC connection error: java.lang.IllegalArgumentException: Unrecognized Hadoop major version number: 3.1.1