ElasticSearch - Unable to load an index with more than 2.1 B documents
Posted: 2021-03-04 13:05:46

I stopped the elasticsearch service and restarted it. Now one of the indices can no longer be loaded. Here is the status of the indices:
yellow open test-index _taPEhm3ReSwCYjb1Y5hQA 1 1 3 0 7.5kb 7.5kb
yellow open wiki-index-000001 aaZvlpgJSuO43uMGKuyqKw 1 1 0 0 208b 208b
yellow open wiki-index-unique s1tU2HpnStWNobkZjQ7KQA 1 1 118098273 51827014 36.8gb 36.8gb
yellow open corpora-index ruArLOJoSv6HkKVBv-o1MA 1 1 289045137 0 47.3gb 47.3gb
red open corpora -86nIPPwS8K5IOpFYNcXBQ 1 1
yellow open simple_bulk 6N8NCLd5S5qOKf9o6R6YhA 1 1 6 0 9.6kb 9.6kb
yellow open test1 SN8ViALMRNGHkBF7o8-3zw 1 1 2 1 5.3kb 5.3kb
yellow open simple-index zYctGNhNRGWrnCOYpKHBcQ 1 1 1 0 4.5kb 4.5kb
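The listing above is the output of Elasticsearch's `_cat/indices` API. To reproduce it, and to see the per-shard state of the red `corpora` index, one can query the `_cat` endpoints. A dry-run sketch that only prints the requests (the host `localhost:9200` is an assumption; drop the `echo`s to actually send them):

```shell
# Assumed cluster address; adjust to your setup.
ES="localhost:9200"

# _cat/indices produced the listing above: health, status, name, uuid,
# primaries, replicas, doc count, deleted docs, total and primary store size.
INDICES_CMD="curl -s $ES/_cat/indices?v"

# _cat/shards shows why an index is red, e.g. an UNASSIGNED primary shard.
SHARDS_CMD="curl -s $ES/_cat/shards/corpora?v"

echo "$INDICES_CMD"
echo "$SHARDS_CMD"
```

Note that the red index shows no doc count or size in the listing: its only primary shard never recovered, so those statistics are unavailable.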
When I look at the cluster's log file, I find this error:
[2020-11-20T23:52:54,498][INFO ][o.e.i.s.IndexShard ] [ilcompn0] [corpora][0] ignoring recovery of a corrupt translog entry
java.lang.IllegalArgumentException: number of documents in the index cannot exceed 2147483519
at org.apache.lucene.index.DocumentsWriterPerThread.reserveOneDoc(DocumentsWriterPerThread.java:211) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8dd$
at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:232) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8$
at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:419) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e -$
at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1333) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ivera $
at org.apache.lucene.index.IndexWriter.softUpdateDocument(IndexWriter.java:1661) ~[lucene-core-8.6.2.jar:8.6.2 016993b65e393b58246d54e8ddda9f56a453eb0e - ive$
at org.elasticsearch.index.engine.InternalEngine.updateDocs(InternalEngine.java:1260) ~[elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:1091) ~[elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:935) ~[elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:819) ~[elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:791) ~[elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.shard.IndexShard.applyTranslogOperation(IndexShard.java:1526) ~[elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.shard.IndexShard.runTranslogRecovery(IndexShard.java:1557) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.shard.IndexShard.lambda$openEngineAndRecoverFromTranslog$9(IndexShard.java:1605) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslogInternal(InternalEngine.java:488) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:463) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.engine.InternalEngine.recoverFromTranslog(InternalEngine.java:125) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.shard.IndexShard.openEngineAndRecoverFromTranslog(IndexShard.java:1610) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.shard.StoreRecovery.internalRecoverFromStore(StoreRecovery.java:436) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.shard.StoreRecovery.lambda$recoverFromStore$0(StoreRecovery.java:98) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.action.ActionListener.completeWith(ActionListener.java:325) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.shard.StoreRecovery.recoverFromStore(StoreRecovery.java:96) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.index.shard.IndexShard.recoverFromStore(IndexShard.java:1883) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.action.ActionRunnable$2.doRun(ActionRunnable.java:73) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:710) [elasticsearch-7.9.1.jar:7.9.1]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.9.1.jar:7.9.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]
Any suggestions on how to recover the index?
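The limit in the exception is Lucene's per-shard document ceiling: each Elasticsearch shard is a single Lucene index, and Lucene's `IndexWriter` caps the document count at `Integer.MAX_VALUE - 128` (a small margin below the Java `int` maximum is reserved). A quick sanity check of the number in the log:

```python
# Java's Integer.MAX_VALUE (32-bit signed int maximum)
INTEGER_MAX_VALUE = 2**31 - 1                  # 2147483647

# Lucene's IndexWriter caps documents per index at MAX_VALUE - 128;
# this matches the 2147483519 in the exception message exactly.
MAX_DOCS_PER_SHARD = INTEGER_MAX_VALUE - 128

print(MAX_DOCS_PER_SHARD)  # 2147483519
```

Because the count includes deleted-but-unmerged documents, a single-shard index can hit this ceiling even if its "live" document count looks lower.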
Comments:

- Which version are you running? A single shard cannot hold more than Integer.MAX documents. Recent versions added a check that rejects writes beyond that limit; in your case, your version apparently lacks that check and let you exceed it.
- @Val: I'm on version 7.9.1. Is there a way to delete some documents from the index so I can recover the rest of them?
- Do you have only one shard? What is the size of the index?
- @dadoonet: Yes, I have only one shard, and its size is about 700 GB.
- @Val: that figure is the size of another index named "corpora-index", which has only 280 M documents. The "corpora" index holds more than 2.1 B documents and is about 700 GB.

Answer 1:

There is now a tool that can help with corrupted data:
bin/elasticsearch-shard remove-corrupted-data --index corpora --shard-id 0 --truncate-clean-translog
You will lose all pending operations that had not yet been written to the index, but at least you will be able to open the index and split it.
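Splitting the recovered single-shard index would then use the `_split` API, so each resulting shard stays well under the ~2.1 B document ceiling. A dry-run sketch that only prints the two requests (the target name `corpora-split` and the shard count of 4 are assumptions, not from the source):

```shell
ES="localhost:9200"   # assumed cluster address

# _split requires the source index to be made read-only first.
BLOCK_CMD="curl -X PUT $ES/corpora/_settings -H 'Content-Type: application/json' -d '{\"index.blocks.write\": true}'"

# Split the 1-primary-shard index into 4 primary shards (the target
# count must be a multiple of the source's shard count).
SPLIT_CMD="curl -X POST $ES/corpora/_split/corpora-split -H 'Content-Type: application/json' -d '{\"settings\": {\"index.number_of_shards\": 4}}'"

echo "$BLOCK_CMD"
echo "$SPLIT_CMD"
```

After the split completes and turns green, the original index can be deleted and the new one aliased back to the old name.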
Discussion:

- Thanks, it looks like it should work. My problem is that I need to run this command as the `elasticsearch` user. I installed elasticsearch from the debian package (elastic.co/guide/en/elasticsearch/reference/7.10/…), and it doesn't say what the `elasticsearch` user's password is. Is there any way to find it?
- You don't need to run it as the elasticsearch user.
- Yes, it needs to be run as the `elasticsearch` user, but that user has no password: `runuser -l elasticsearch -c '...the command...'`
- You can specify the conf directory and it will work, like this: `ES_PATH_CONF=/etc/elasticsearch bin/elasticsearch-shard .....`