Timeout when inserting large binary files into Cassandra
Posted by 波子汽水yeah
Once a single file reaches about 20 MB, the insert into Cassandra fails with the error below (20 MB is probably not the exact threshold and it may be smaller; files of 3 to 4 MB were still fine, and I did not test exactly where it breaks):
com.datastax.driver.core.exceptions.OperationTimedOutException: [/112.93.116.174:9042] Timed out waiting for server response
at com.datastax.driver.core.exceptions.OperationTimedOutException.copy(OperationTimedOutException.java:44)
at com.datastax.driver.core.exceptions.OperationTimedOutException.copy(OperationTimedOutException.java:26)
at com.datastax.driver.core.DriverThrowables.propagateCause(DriverThrowables.java:37)
at com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:245)
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:64)
at com.ym.automaticstation.CountrywideAutomaticStationThread.insertRadarREF2Cass(CountrywideAutomaticStationThread.java:100)
at com.ym.automaticstation.CountrywideAutomaticStationThread.run(CountrywideAutomaticStationThread.java:65)
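For context, the failing call is a plain blob insert. A minimal sketch of what insertRadarREF2Cass roughly looks like (the method body is not shown in this post, so the keyspace, table, and column names below are assumptions):

import java.nio.ByteBuffer;
import java.nio.file.Files;
import java.nio.file.Paths;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

// Read the whole radar file into memory and bind it to a blob column.
// Keyspace, table, and column names are placeholders, not from the original code.
void insertRadarREF2Cass(Session session, String fileId, String path) throws Exception {
    byte[] data = Files.readAllBytes(Paths.get(path));
    PreparedStatement ps = session.prepare(
            "INSERT INTO radar.ref_files (file_id, content) VALUES (?, ?)");
    // execute() blocks until the driver gets a response; with a ~20 MB blob
    // this is where the OperationTimedOutException above is thrown.
    session.execute(ps.bind(fileId, ByteBuffer.wrap(data)));
}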
First attempt: edited the database configuration file (cassandra.yaml) and increased write_request_timeout_in_ms to 60 seconds.
Configuration:
# How long the coordinator should wait for read operations to complete
#read_request_timeout_in_ms: 5000
read_request_timeout_in_ms: 60000
# How long the coordinator should wait for seq or index scans to complete
range_request_timeout_in_ms: 60000
# How long the coordinator should wait for writes to complete
#write_request_timeout_in_ms: 2000
write_request_timeout_in_ms: 60000
# How long the coordinator should wait for counter writes to complete
counter_write_request_timeout_in_ms: 60000
# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
cas_contention_timeout_in_ms: 3000
# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
truncate_request_timeout_in_ms: 60000
# The default timeout for other, miscellaneous operations
request_timeout_in_ms: 10000
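One thing worth pointing out (not something tried in the original post): OperationTimedOutException is thrown by the DataStax Java driver's own client-side read timeout, which defaults to 12000 ms, so raising only the server-side timeouts in cassandra.yaml may not be enough for a 20 MB write. A minimal sketch of raising the client-side timeout as well, reusing the same Cluster.builder() setup shown below:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.SocketOptions;

// The driver gives up on a request with OperationTimedOutException when no
// response arrives within the socket read timeout (default 12000 ms),
// independently of the server-side *_request_timeout_in_ms settings.
SocketOptions socketOptions = new SocketOptions()
        .setReadTimeoutMillis(60000);   // match the 60 s server-side timeout

Cluster cluster = Cluster.builder()
        .addContactPoints(hosts)        // hosts as in the snippet below
        .withSocketOptions(socketOptions)
        .build();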
Second attempt: modified the database connection to add a retry policy. The problem was still not solved!
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PoolingOptions;
import com.datastax.driver.core.policies.DowngradingConsistencyRetryPolicy;
import com.datastax.driver.core.policies.RetryPolicy;

// Retry with a downgraded consistency level when not enough replicas respond in time.
RetryPolicy retryPolicy = DowngradingConsistencyRetryPolicy.INSTANCE;
cluster = Cluster.builder()
        .addContactPoints(hosts)
        .withCredentials("cassandra", "cassandra")
        .withPort(PORT)
        .withPoolingOptions(poolingOptions)
        .withRetryPolicy(retryPolicy)
        .build();
instance = cluster.connect();
I cannot get to Google these days, and Baidu turned up very few useful references, so I am writing this down here; pointers from anyone with experience are very welcome!
In the end I changed the approach: large files are compressed into a zip first and then stored.
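A minimal sketch of that workaround, assuming the file content is already in a byte array and is written as a single-entry in-memory zip before the blob insert (the helper name zipBytes is hypothetical):

import java.io.ByteArrayOutputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Compress a byte[] into an in-memory zip with one entry, so the stored blob
// stays well below the size that triggered the timeouts.
static byte[] zipBytes(String entryName, byte[] raw) throws Exception {
    ByteArrayOutputStream bos = new ByteArrayOutputStream();
    try (ZipOutputStream zos = new ZipOutputStream(bos)) {
        zos.putNextEntry(new ZipEntry(entryName));
        zos.write(raw);
        zos.closeEntry();
    }
    return bos.toByteArray();
}

// Usage: session.execute(ps.bind(fileId, ByteBuffer.wrap(zipBytes(fileId, data))));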