Phoenix connecting to an HBase database: creating a secondary index fails with Error: org.apache.phoenix.exception.PhoenixIOException: Failed after atte
Environment:
- OS version: CentOS release 6.5 (Final)
- Kernel version: 2.6.32-431.el6.x86_64
- Phoenix version: phoenix-4.10.0
- HBase version: hbase-1.2.6
- Table SYNC_BUSINESS_INFO_BYDAY row count: 9.9 million+
Problem description:
When creating a secondary index through the Phoenix client connected to HBase, the following error is reported:
0: jdbc:phoenix:host-10-191-5-226> create index SYNC_BUSINESS_INFO_BYDAY_IDX_1 on SYNC_BUSINESS_INFO_BYDAY(day_id) include(id,channel_type,net_type,prov_id,area_id,city_code,channel_id,staff_id,trade_num,sync_file_name,sync_date);
Error: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041 (state=08000,code=101)
org.apache.phoenix.exception.PhoenixIOException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:852)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:796)
at org.apache.phoenix.iterate.ConcatResultIterator.getIterators(ConcatResultIterator.java:50)
at org.apache.phoenix.iterate.ConcatResultIterator.currentIterator(ConcatResultIterator.java:97)
at org.apache.phoenix.iterate.ConcatResultIterator.next(ConcatResultIterator.java:117)
at org.apache.phoenix.iterate.BaseGroupedAggregatingResultIterator.next(BaseGroupedAggregatingResultIterator.java:64)
at org.apache.phoenix.iterate.UngroupedAggregatingResultIterator.next(UngroupedAggregatingResultIterator.java:39)
at org.apache.phoenix.compile.UpsertCompiler$1.execute(UpsertCompiler.java:754)
at org.apache.phoenix.compile.DelegateMutationPlan.execute(DelegateMutationPlan.java:31)
at org.apache.phoenix.compile.PostIndexDDLCompiler$1.execute(PostIndexDDLCompiler.java:124)
at org.apache.phoenix.query.ConnectionQueryServicesImpl.updateData(ConnectionQueryServicesImpl.java:3332)
at org.apache.phoenix.schema.MetaDataClient.buildIndex(MetaDataClient.java:1302)
at org.apache.phoenix.schema.MetaDataClient.createIndex(MetaDataClient.java:1584)
at org.apache.phoenix.compile.CreateIndexCompiler$1.execute(CreateIndexCompiler.java:85)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:358)
at org.apache.phoenix.jdbc.PhoenixStatement$2.call(PhoenixStatement.java:341)
at org.apache.phoenix.call.CallRunner.run(CallRunner.java:53)
at org.apache.phoenix.jdbc.PhoenixStatement.executeMutation(PhoenixStatement.java:339)
at org.apache.phoenix.jdbc.PhoenixStatement.execute(PhoenixStatement.java:1511)
at sqlline.Commands.execute(Commands.java:822)
at sqlline.Commands.sql(Commands.java:732)
at sqlline.SqlLine.dispatch(SqlLine.java:813)
at sqlline.SqlLine.begin(SqlLine.java:686)
at sqlline.SqlLine.start(SqlLine.java:398)
at sqlline.SqlLine.main(SqlLine.java:291)
Caused by: java.util.concurrent.ExecutionException: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at org.apache.phoenix.iterate.BaseResultIterators.getIterators(BaseResultIterators.java:847)
... 24 more
Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:146)
at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.phoenix.exception.PhoenixIOException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at org.apache.phoenix.util.ServerUtil.parseServerException(ServerUtil.java:113)
at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:65)
at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:139)
... 10 more
Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue Mar 06 10:32:02 CST 2018, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at org.apache.hadoop.hbase.client.RpcRetryingCallerWithReadReplicas.throwEnrichedException(RpcRetryingCallerWithReadReplicas.java:276)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:409)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:370)
at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
... 11 more
Caused by: java.net.SocketTimeoutException: callTimeout=60000, callDuration=60101: row '20171231103826898918737449335226' on table 'SYNC_BUSINESS_INFO_BYDAY' at region=SYNC_BUSINESS_INFO_BYDAY,20171231103826898918737449335226,1516785469890.75008face58e29afbe41d239efc0e8eb., hostname=host-10-191-5-227,16020,1520250519822, seqNum=5005041
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:169)
at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
... 3 more
Caused by: java.io.IOException: Call to host-10-191-5-227/10.191.5.227:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=153, waitTime=60001, operationTimeout=60000 expired.
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.wrapException(AbstractRpcClient.java:292)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1271)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:219)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:64)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:364)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:338)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:136)
... 4 more
Caused by: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=153, waitTime=60001, operationTimeout=60000 expired.
at org.apache.hadoop.hbase.ipc.Call.checkAndSetTimeout(Call.java:73)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1245)
... 13 more
Problem analysis:
The stack trace shows that the scan issued by the Phoenix client hit an org.apache.hadoop.hbase.ipc.CallTimeoutException: a call to the region server ran longer than the configured timeout. The callTimeout=60000 in the messages corresponds to the default 60-second client timeouts (hbase.rpc.timeout and hbase.client.scanner.timeout.period both default to 60000 ms), which is too short for the full-table scan that building an index over 9.9 million+ rows requires.
Problem resolution:
Since these are client-side settings governing how Phoenix uses HBase, they need to be changed or added in the hbase-site.xml configuration file used by the Phoenix client.
Location of Phoenix's hbase-site.xml: the bin directory of the Phoenix installation (here /mnt/aiprd/app/phoenix-4.10.0/bin, the same directory that HBASE_CONF_PATH is pointed at below).
Many of the solutions found online did not fix the problem. Eventually, an article found through a Bing search provided the following reference configuration.
Changes:
1. Modify Phoenix's hbase-site.xml configuration file as follows:
<configuration>
    <property>
        <name>hbase.regionserver.wal.codec</name>
        <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value>
    </property>
    <property>
        <name>phoenix.query.timeoutMs</name>
        <value>1800000</value>
    </property>
    <property>
        <name>hbase.regionserver.lease.period</name>
        <value>1200000</value>
    </property>
    <property>
        <name>hbase.rpc.timeout</name>
        <value>1200000</value>
    </property>
    <property>
        <name>hbase.client.scanner.caching</name>
        <value>1000</value>
    </property>
    <property>
        <name>hbase.client.scanner.timeout.period</name>
        <value>1200000</value>
    </property>
</configuration>
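For reference (based on standard HBase/Phoenix client configuration semantics, not stated in the original post): hbase.rpc.timeout bounds a single RPC to a region server, hbase.client.scanner.timeout.period (together with the older hbase.regionserver.lease.period name) bounds the scanner lease between consecutive client check-ins, phoenix.query.timeoutMs bounds the overall Phoenix query, and hbase.client.scanner.caching caps how many rows each scanner RPC returns. The hbase.regionserver.wal.codec entry is the IndexedWALEditCodec setting Phoenix needs for mutable secondary indexes and is unrelated to the timeout itself.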
2. Set the HBASE_CONF_PATH environment variable
[aiprd@host-10-191-5-227 phoenix-4.10.0]$ export HBASE_CONF_PATH=/mnt/aiprd/app/phoenix-4.10.0/bin
[aiprd@host-10-191-5-227 phoenix-4.10.0]$ echo $HBASE_CONF_PATH
/mnt/aiprd/app/phoenix-4.10.0/bin
Note: this environment variable points to the directory containing hbase-site.xml. If it is not set, the default configuration may still be in effect; the goal is to have the settings in hbase-site.xml override those defaults.
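With the larger timeouts in place and HBASE_CONF_PATH pointing at the modified file, the CREATE INDEX statement above can be re-run from sqlline. As a quick sanity check afterwards (a sketch only; the literal day_id value '20171231' is a hypothetical example inferred from the row keys in the trace), EXPLAIN on a query filtered by the indexed column should show the plan scanning the new index table rather than the base table:

EXPLAIN SELECT id, channel_type FROM SYNC_BUSINESS_INFO_BYDAY WHERE day_id = '20171231';
-- The plan is expected to reference SYNC_BUSINESS_INFO_BYDAY_IDX_1 (e.g. a RANGE SCAN over the
-- index) instead of a FULL SCAN of SYNC_BUSINESS_INFO_BYDAY, since id and channel_type are
-- covered columns from the INCLUDE list.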
Additionally, if HBASE_CONF_PATH is not set, an error such as the following may be reported:
0: jdbc:phoenix:host-10-191-5-226> create index SYNC_BUSINESS_INFO_BYDAY_IDX_1 on SYNC_BUSINESS_INFO_BYDAY(day_id) include(id,channel_type,net_type,prov_id,area_id,city_code,channel_id,staff_id,trade_num,sync_file_name,sync_date);
18/03/06 14:18:55 WARN client.ScannerCallable: Ignore, probably already closed
org.apache.hadoop.hbase.UnknownScannerException: org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '2842'. This can happen due to any of the following reasons: a) Scanner id given is wrong, b) Scanner lease expired because of long wait between consecutive client checkins, c) Server may be closing down, d) RegionServer restart during upgrade.
If the issue is due to reason (b), a possible fix would be increasing the value of 'hbase.client.scanner.timeout.period' configuration.
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2394)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:745)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:329)
at org.apache.hadoop.hbase.client.ScannerCallable.close(ScannerCallable.java:379)
at org.apache.hadoop.hbase.client.ScannerCallable.call(ScannerCallable.java:199)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:145)
at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas.call(ScannerCallableWithReplicas.java:60)
at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:210)
at org.apache.hadoop.hbase.client.ClientScanner.call(ClientScanner.java:326)
at org.apache.hadoop.hbase.client.ClientScanner.nextScanner(ClientScanner.java:264)
at org.apache.hadoop.hbase.client.ClientScanner.possiblyNextScanner(ClientScanner.java:247)
at org.apache.hadoop.hbase.client.ClientScanner.loadCache(ClientScanner.java:540)
at org.apache.hadoop.hbase.client.ClientScanner.next(ClientScanner.java:370)
at org.apache.phoenix.iterate.ScanningResultIterator.next(ScanningResultIterator.java:55)
at org.apache.phoenix.iterate.TableResultIterator.next(TableResultIterator.java:139)
at org.apache.phoenix.iterate.LookAheadResultIterator$1.advance(LookAheadResultIterator.java:47)
at org.apache.phoenix.iterate.LookAheadResultIterator.init(LookAheadResultIterator.java:59)
at org.apache.phoenix.iterate.LookAheadResultIterator.peek(LookAheadResultIterator.java:73)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:121)
at org.apache.phoenix.iterate.ParallelIterators$1.call(ParallelIterators.java:106)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.phoenix.job.JobManager$InstrumentedJobFutureTask.run(JobManager.java:183)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.UnknownScannerException): org.apache.hadoop.hbase.UnknownScannerException: Unknown scanner '2842'. This can happen due to any of the following reasons: a) Scanner id given is wrong, b) Scanner lease expired because of long wait between consecutive client checkins, c) Server may be closing down, d) RegionServer restart during upgrade.
If the issue is due to reason (b), a possible fix would be increasing the value of 'hbase.client.scanner.timeout.period' configuration.
at org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2394)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:33648)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2196)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:112)
at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.had