inside a shard
fsync / sync
sync is a standard system call in the Unix operating system, which commits to disk all data in the kernel filesystem buffers, i.e. data which has been scheduled for writing via low-level I/O system calls. Higher-level I/O layers such as stdio may maintain separate buffers of their own. The related system call fsync() commits just the buffered data relating to a specified file descriptor. fdatasync() is also available to write out just the changes made to the data in the file, and not necessarily the file's related metadata.
inverted index
- a sorted list of terms (the term dictionary)
- a similarity algorithm for relevance scoring
- term normalization: lowercasing, singular/plural (stemming), synonyms (see the analyze sketch below)
Every indexed field in a JSON document has its own inverted index.
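To see how field text is normalized into the terms that end up in an inverted index, the analyze API can be used. A minimal sketch; the english analyzer is just one possible choice, and the exact request form varies a little between Elasticsearch versions:
GET /_analyze
{
  "analyzer": "english",
  "text": "The QUICK brown foxes"
}
The english analyzer lowercases the text, drops stopwords, and stems plurals, so "foxes" ends up as the single term "fox", which is exactly the kind of normalization listed above.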
The inverted index that is written to disk is immutable. Immutability has several benefits:
- No locking is needed; there are no multi-threading problems caused by concurrent updates.
- Once the index has been read into the filesystem cache, it stays there, because it never changes; reads come from memory instead of hitting the disk, which improves performance.
- Other caches, such as the filter cache, remain valid for the life of the index; since the index never changes, these caches never need to be rebuilt when data changes.
- It allows the data to be compressed: writing a single large inverted index reduces costly disk I/O and the amount of RAM needed to cache the index.
The downside of an immutable inverted index:
- To make a new document searchable, the whole index has to be rebuilt, which limits both how much data an index can hold and how often it can be updated.
How to make the index updatable while keeping the benefits of immutability:
- Use more indices.
- This is the idea behind per-segment search: a segment is an inverted index in its own right.
A shard contains multiple segments.
- A shard is a single Lucene index; an Elasticsearch index is a collection of shards.
- New documents are first collected in an in-memory indexing buffer and committed every so often.
What happens when the indexing buffer is committed:
- The new documents in the Lucene index's in-memory buffer are ready to be committed.
- A new segment (a supplementary inverted index) is written to disk.
- A new commit point that includes the new segment is written to disk, and the buffer is cleared. (A commit point lists all known segments.)
- The disk is fsynced: all writes waiting in the filesystem cache are flushed to the physical files (persisted).
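The segments of each shard, including whether a segment is already part of the last commit point and whether it is open for search, can be inspected with the indices segments API. A minimal sketch, reusing the my_logs index from the examples below:
GET /my_logs/_segments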
At query time, each segment is queried in turn and the results are combined.
Segments are immutable, so documents cannot be removed from or added to an old segment. Instead, every commit point includes a .del file that records which document in which segment has been deleted. When a document is updated, the old version is marked as deleted and the new version is indexed into a new segment.
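A sketch of what this means for by-ID writes; the event type, ID, and field below are made up for illustration. Neither request rewrites an existing segment: the old version is merely marked as deleted in the .del file, and the space is reclaimed later during segment merging.
DELETE /my_logs/event/1
PUT /my_logs/event/1
{
  "message": "new version of the document"
}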
How to make changed documents searchable sooner
The bottleneck is the disk: committing a new segment to disk requires an fsync, and fsync is expensive.
Between Elasticsearch and the disk sits the filesystem cache: a new segment is first written to the filesystem cache, where it is already searchable, and only later flushed to disk.
This lighter process of writing and opening a new segment is called a refresh. By default every shard refreshes once per second, controlled by the refresh_interval setting:
PUT /my_logs
{
  "settings": {
    "refresh_interval": "30s"
  }
}
This setting can be updated dynamically. You can, for example, disable refresh while building a large index and turn it back on when you start using it (a manual refresh sketch follows these two snippets):
PUT /my_logs/_settings
{ "refresh_interval": -1 }
PUT /my_logs/_settings
{ "refresh_interval": "1s" }
Persistence
full commit: the segments sitting in the filesystem cache are fsynced to disk and a commit point is written; this is what is used to recover after a failure.
A commit point lists all known segments; when Elasticsearch starts up or reopens an index, it uses the commit point to know which segments belong to which shard.
But what about the changes that happen between two full commits?
translog
The translog (transaction log) records every operation in Elasticsearch.
A new document is first added to the in-memory buffer and appended to the translog.
On a refresh, the buffer is cleared but the translog is kept:
- The docs in the in-memory buffer are written to a new segment, without an fsync.
- The segment is opened to make it visible to search.
- The in-memory buffer is cleared.
full commit
flush = full commit + create a new translog
When the translog gets too big, or after a set interval, the index is flushed: a new translog is created and a full commit is performed:
- Any docs in the in-memory buffer are written to a new segment.
- The buffer is cleared.
- A commit point is written to disk.
- The filesystem cache is flushed with an fsync.
- The old translog is deleted.
When starting up, Elasticsearch uses the last commit point to recover known segments from disk, and then replays all operations in the translog to add the changes that happened after the last commit.
The translog is also used to provide real-time CRUD: when you retrieve, update, or delete a document by ID, the translog is checked first for any recent changes before the document is fetched from the relevant segment. This means you always have real-time access to the latest known version of a document.
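For example (type and ID are again hypothetical), a by-ID GET returns the latest version even if it has not yet been refreshed into a searchable segment, because the translog is consulted first:
GET /my_logs/event/1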
Performing a full commit and truncating the translog is called a flush. Shards are flushed automatically every 30 minutes by default, or when the translog becomes too big.
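A flush can also be requested manually with the flush API, although this is rarely needed. The second form below, from the same era of the reference guide as these notes, waits for any flush already in progress to finish:
POST /my_logs/_flush
POST /_flush?wait_for_ongoing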
translog settings
- index.translog.sync_interval: How often the translog is fsynced to disk and committed, regardless of write operations. Defaults to 5s.
- index.translog.durability: Whether to fsync and commit the translog after every index, delete, update, or bulk request; see the example after this list. Options:
request: (default) fsync and commit after every write request (e.g. index, delete, update, bulk). In the event of hardware failure, all acknowledged writes will already have been committed to disk.
async: fsync and commit in the background every sync_interval (you can lose up to sync_interval's worth of data). In the event of hardware failure, all acknowledged writes since the last automatic commit will be discarded.
- index.translog.fs.type: Whether to buffer writes to the transaction log in memory or not. This setting accepts the following parameters:
buffered: (default) Translog writes first go to a 64kB buffer in memory, and are only written to the disk when the buffer is full, or when an fsync is triggered by a write request or the sync_interval.
simple: Translog writes are written to the file system immediately, without buffering. However, these writes will only be persisted to disk when an fsync and commit is triggered by a write request or the sync_interval.
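As an example of the durability setting above: for a bulk load where losing a few seconds of data on hardware failure is acceptable, durability can be switched to async on a live index and back to request afterwards (index name as in the earlier examples):
PUT /my_logs/_settings
{ "index.translog.durability": "async" }
PUT /my_logs/_settings
{ "index.translog.durability": "request" }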
segment merge
With automatic refresh creating a new segment every second, the number of segments grows quickly. Every search has to check every segment in turn, so the more segments there are, the slower the search.
Elasticsearch merges segments in the background, combining small segments into larger ones. This is also when old deleted documents are actually purged from the filesystem: deleted documents and old versions of updated documents are not copied over into the new, bigger segment.
Once merging is complete:
- The new segment is flushed to disk.
- A new commit point is written that includes the new segment and excludes the old, smaller segments.
- The new segment is opened for search.
- The old segments are deleted.
optimize api
The optimize API forces a merge: it reduces each shard to at most max_num_segments segments. It should not be used on an actively updated index.
POST /logstash-2014-10/_optimize?max_num_segments=1
Merges triggered by optimize are completely unthrottled and can consume all the I/O on a node. If you plan on optimizing an index, you should use shard allocation (see Migrate Old Indices) to first move the index to a node where it is safe to run.
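A sketch of that allocation step, following the box_type convention from the guide's Migrate Old Indices example; box_type is a custom node attribute that has to be configured on the target nodes beforehand:
PUT /logstash-2014-10/_settings
{
  "index.routing.allocation.include.box_type": "medium"
}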