Too many open files during a Cassandra nodetool repair
I issued a nodetool repair command on one node. The node crashed, and its log file shows the following error messages:
INFO [STREAM-IN-/192.168.2.100] 2015-02-13 21:36:23,077 StreamResultFuture.java:180 - [Stream #8fb54551-b3bd-11e4-9620-4b92877f0505] Session with /192.168.2.100 is complete
INFO [STREAM-IN-/192.168.2.100] 2015-02-13 21:36:23,078 StreamResultFuture.java:212 - [Stream #8fb54551-b3bd-11e4-9620-4b92877f0505] All sessions completed
INFO [STREAM-IN-/192.168.2.100] 2015-02-13 21:36:23,078 StreamingRepairTask.java:96 - [repair #508bd650-b3bd-11e4-9620-4b92877f0505] streaming task succeed, returning response to node4/192.168.2.104
INFO [AntiEntropyStage:1] 2015-02-13 21:38:52,795 RepairSession.java:237 - [repair #508bd650-b3bd-11e4-9620-4b92877f0505] repcode is fully synced
INFO [AntiEntropySessions:27] 2015-02-13 21:38:52,795 RepairSession.java:299 - [repair #508bd650-b3bd-11e4-9620-4b92877f0505] session completed successfully
INFO [AntiEntropySessions:27] 2015-02-13 21:38:52,795 RepairSession.java:260 - [repair #03858e40-b3be-11e4-9620-4b92877f0505] new session: will sync node4/192.168.2.104, /192.168.2.100, /192.168.2.101 on range (8805399388216156805,8848902871518111273] for data.[repcode]
INFO [AntiEntropySessions:27] 2015-02-13 21:38:52,795 RepairJob.java:145 - [repair #03858e40-b3be-11e4-9620-4b92877f0505] requesting merkle trees for repcode (to [/192.168.2.100, /192.168.2.101, node4/192.168.2.104])
WARN [StreamReceiveTask:74] 2015-02-13 21:41:58,544 CLibrary.java:231 - open(/user/jlor/apache-cassandra/data/data/data/repcode-398f26f0b11511e49faf195596ed1fd9, O_RDONLY) failed, errno (23).
WARN [STREAM-IN-/192.168.2.101] 2015-02-13 21:41:58,672 CLibrary.java:231 - open(/user/jlor/apache-cassandra/data/data/data/repcode-398f26f0b11511e49faf195596ed1fd9, O_RDONLY) failed, errno (23).
WARN [STREAM-IN-/192.168.2.101] 2015-02-13 21:41:58,871 CLibrary.java:231 - open(/user/jlor/apache-cassandra/data/data/data/repcode-398f26f0b11511e49faf195596ed1fd9, O_RDONLY) failed, errno (23).
ERROR [StreamReceiveTask:74] 2015-02-13 21:41:58,986 CassandraDaemon.java:153 - Exception in thread Thread[StreamReceiveTask:74,5,main]
org.apache.cassandra.io.FSWriteError: java.io.FileNotFoundException: /user/jlor/apache-cassandra/data/data/data/repcode-398f26f0b11511e49faf195596ed1fd9/data-repcode-tmp-ka-245139-TOC.txt (Too many open files in system)
at org.apache.cassandra.io.sstable.SSTable.appendTOC(SSTable.java:282) ~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.io.sstable.SSTableWriter.close(SSTableWriter.java:483) ~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:434) ~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:429) ~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.io.sstable.SSTableWriter.closeAndOpenReader(SSTableWriter.java:424) ~[apache-cassandra-2.1.2.jar:2.1.2]
at org.apache.cassandra.streaming.StreamReceiveTask$OnCompletionRunnable.run(StreamReceiveTask.java:120) ~[apache-cassandra-2.1.2.jar:2.1.2]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_31]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) ~[na:1.8.0_31]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) ~[na:1.8.0_31]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_31]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_31]
Caused by: java.io.FileNotFoundException: /usr/jlo/apache-cassandra/data/data/data/repcode-398f26f0b11511e49faf195596ed1fd9/data-repcode-tmp-ka-245139-TOC.txt (Too many open files in system)
at java.io.FileOutputStream.open(Native Method) ~[na:1.8.0_31]
at java.io.FileOutputStream.<init>(FileOutputStream.java:213) ~[na:1.8.0_31]
at java.io.FileWriter.<init>(FileWriter.java:107) ~[na:1.8.0_31]
at org.apache.cassandra.io.sstable.SSTable.appendTOC(SSTable.java:276) ~[apache-cassandra-2.1.2.jar:2.1.2]
... 10 common frames omitted
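The message "Too many open files in system" with errno 23 is ENFILE: the kernel-wide open-file table is exhausted, which is different from the per-process EMFILE limit that `ulimit -n` controls. A minimal sketch for comparing system-wide usage against the kernel ceiling (standard Linux procfs paths, not taken from the log above):

```shell
#!/bin/sh
# Three numbers: allocated handles, free-but-allocated handles, and the
# kernel-wide maximum. ENFILE means the first value reached the last.
cat /proc/sys/fs/file-nr

# The same ceiling on its own; it can be raised at runtime with
# "sysctl -w fs.file-max=<N>" and persisted in /etc/sysctl.conf.
cat /proc/sys/fs/file-max
```

If the first value in file-nr is close to file-max, raising the per-user nofile limit alone will not help; the system-wide ceiling has to go up as well.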
We have a small cluster of 5 nodes, node0 through node4. I have a table with roughly 3.4 billion rows and replication factor 3. Here is the table description:
CREATE TABLE data.repcode (
rep int,
type text,
code text,
yyyymm int,
trd int,
eq map<text, bigint>,
iq map<text, bigint>,
PRIMARY KEY ((rep, type, code), yyyymm, trd))
WITH CLUSTERING ORDER BY (yyyymm ASC, trd ASC)
AND bloom_filter_fp_chance = 0.1
AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
AND comment = ''
AND compaction = {'min_threshold': '4', 'class': 'org.apache.cassandra.db.compaction.LeveledCompactionStrategy', 'max_threshold': '32'}
AND compression = {'sstable_compression': 'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 864000
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
I am running Cassandra 2.1.2. I have set the maximum open files limit to 200,000 on all nodes.
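A limit set in limits.conf only takes effect for sessions started after the change, so it is worth confirming that the running JVM actually inherited the 200,000. A sketch for inspecting the live process; the pgrep pattern "CassandraDaemon" is an assumption, adjust it to your install:

```shell
#!/bin/sh
# Find the Cassandra JVM and show the limits the kernel is enforcing on it,
# plus how many file descriptors it currently has open.
pid=$(pgrep -f CassandraDaemon | head -n1)

if [ -n "$pid" ]; then
    # Soft/hard per-process limits as actually applied (not what limits.conf says)
    grep 'Max open files' "/proc/$pid/limits"
    # Current file descriptor count for the process
    ls "/proc/$pid/fd" | wc -l
else
    echo "Cassandra process not found" >&2
fi
```

If "Max open files" here still shows an old value, the daemon was started before the limit change (or via a path that bypasses pam_limits) and needs a restart.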
Before issuing the nodetool repair command, I counted the files in each node's data directory. Here is the per-node count before the crash:
node0: 27,099
node1: 27,187
node2: 36,131
node3: 26,635
node4: 26,371
And now, after the crash:
node0: 946,555
node1: 973,531
node2: 844,211
node3: 1,024,147
node4: 1,971,772
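For reference, per-table counts like these can be gathered with a short script; the DATA_DIR default below is an assumption, point it at one of the data_file_directories entries from your cassandra.yaml:

```shell
#!/bin/sh
# Count files under each keyspace/table directory to spot runaway sstable
# growth after a repair. Prints "<dir>: <count>", largest count first.
DATA_DIR="${DATA_DIR:-/var/lib/cassandra/data}"

if [ -d "$DATA_DIR" ]; then
    find "$DATA_DIR" -mindepth 2 -maxdepth 2 -type d | while read -r table_dir; do
        printf '%s: %s\n' "$table_dir" "$(find "$table_dir" -type f | wc -l)"
    done | sort -t: -k2 -rn
else
    echo "data directory $DATA_DIR not found" >&2
fi
```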
Is it normal for the number of files in a single unix directory to grow to this extent? What can I do to avoid this situation, and how do I prevent it in the future? Should I raise the open files limit further? It already seems very large to me. Is my cluster simply too small for this many records? Should I use a different compaction strategy?
Thanks for your help.
What is the output of:
ulimit -a | grep "open files"
The recommended resource limits (ulimit) for Cassandra should be set as follows (for RHEL 6):
cassandra - memlock unlimited
cassandra - nofile 100000
cassandra - nproc 32768
cassandra - as unlimited
The exact file and user name will vary with your install type and the user you run Cassandra as. The above assumes a packaged install, with those lines in /etc/security/limits.d/cassandra.conf, running Cassandra as the "cassandra" user (for a tarball install you would edit /etc/security/limits.conf instead).
If your setup differs, consult the documentation linked above. Also note that if you run Cassandra as root, some distributions require the limits to be set explicitly for the root user.
Edit 2018-03-30
Note that the /etc/security/limits.conf adjustment above applies to CentOS/RHEL 6 systems. On other systems the adjustment should be made in /etc/security/limits.d/cassandra.conf.
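Since pam_limits applies these values at session start, a quick way to confirm an edit took effect is to check what a fresh session for the Cassandra user would get. A sketch assuming the "cassandra" user from the packaged-install layout above (the su call needs root and falls back to the current shell otherwise):

```shell
#!/bin/sh
# Show the nofile limit a new session for the cassandra user would receive.
su - cassandra -s /bin/sh -c 'ulimit -n' 2>/dev/null \
    || ulimit -n   # fallback: the current shell's own limit

# An already-running daemon keeps the limits it started with; restart
# Cassandra after changing them, e.g.:
#   sudo service cassandra restart
```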