Elasticsearch plugin: esm (data migration)
Posted by 一曲广陵散
1. References
elasticsearch learning series index (continuously updated)
esm source code on GitHub
2. Download and install
2.1 Download the source
esm-0.5
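If you prefer the command line, the source tarball can be pulled straight from GitHub (the tag name v0.5.0 below is an assumption; check the releases page for the exact tag and asset names):
# assumed tag name; list the actual tags at https://github.com/medcl/esm/releases
curl -L -o esm-0.5.0.tar.gz https://github.com/medcl/esm/archive/refs/tags/v0.5.0.tar.gz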
2.2 Install
(1) Install from source
tar -xvzf esm-0.5.0.tar.gz
cd esm-0.5.0/
make
(2) Install from a prebuilt package
tar -xzvf darwin64.tar.gz
ls -l bin/darwin64
esm
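If the extracted binary is not executable on your machine (an assumption; the archive may already preserve the execute bit), mark it executable before running:
# only needed when the tarball did not preserve file permissions
chmod +x bin/darwin64/esm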
3. Exporting data to a local file
./bin/darwin64/esm --help
Usage:
esm [OPTIONS]
Application Options:
-s, --source= source elasticsearch instance, ie: http://localhost:9200
-q, --query= query against source elasticsearch instance, filter data
before migrate, ie: name:medcl
-d, --dest= destination elasticsearch instance, ie: http://localhost:9201
-m, --source_auth= basic auth of source elasticsearch instance, ie: user:pass
-n, --dest_auth= basic auth of target elasticsearch instance, ie: user:pass
-c, --count= number of documents at a time: ie "size" in the scroll request
(10000)
--buffer_count= number of buffered documents in memory (1000000)
-w, --workers= concurrency number for bulk workers (1)
-b, --bulk_size= bulk size in MB (5)
-t, --time= scroll time (10m)
--sliced_scroll_size= size of sliced scroll, to make it work, the size should be > 1
(1)
-f, --force delete destination index before copying
-a, --all copy indexes starting with . and _
--copy_settings copy index settings from source
--copy_mappings copy index mappings from source
--shards= set a number of shards on newly created indexes
-x, --src_indexes= indexes name to copy,support regex and comma separated list
(_all)
-y, --dest_index= indexes name to save, allow only one indexname, original
indexname will be used if not specified
-u, --type_override= override type name
--green wait for both hosts cluster status to be green before dump.
otherwise yellow is okay
-v, --log= setting log level,options:trace,debug,info,warn,error (INFO)
-o, --output_file= output documents of source index into local file
-i, --input_file= indexing from local dump file
--input_file_type= the data type of input file, options: dump, json_line,
json_array, log_line (dump)
--source_proxy= set proxy to source http connections, ie: http://127.0.0.1:8080
--dest_proxy= set proxy to target http connections, ie: http://127.0.0.1:8080
--refresh refresh after migration finished
--fields= filter source fields, comma separated, ie: col1,col2,col3,...
--rename= rename source fields, comma separated, ie: _type:type,
name:myname
-l, --logstash_endpoint= target logstash tcp endpoint, ie: 127.0.0.1:5055
--secured_logstash_endpoint target logstash tcp endpoint was secured by TLS
--repeat_times= repeat the data from source N times to dest output, use align
with parameter regenerate_id to amplify the data size
-r, --regenerate_id regenerate id for documents, this will override the exist
document id in data source
--compress use gzip to compress traffic
-p, --sleep= sleep N seconds after each bulk request (-1)
Help Options:
-h, --help Show this help message
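Putting the options together, a direct cluster-to-cluster migration looks like the sketch below (hosts, credentials, worker count, and index names are placeholders, not values from the original post):
# copy myindex between clusters with 5 bulk workers and 10 MB bulk requests,
# carrying over index settings and mappings; all values are placeholders
./bin/darwin64/esm \
  -s http://src-host:9200 -m user:pass \
  -d http://dst-host:9200 -n user:pass \
  -x myindex -y myindex \
  --copy_settings --copy_mappings \
  -w 5 -b 10 --refresh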
3.1 Create test data first
Create the target index, copy documents into it from an existing index with _reindex, then track progress through the task API:
PUT yz_tracing-000001
POST _reindex
{
"source": {
"index": "test_tracing-000001"
},
"dest": {
"index": "yz_tracing-000001"
}
}
GET _tasks?actions=*reindex
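After the reindex task completes, it is worth confirming the document count on the new index (a standard _count request; the index name follows the example above):
GET yz_tracing-000001/_count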
Export an index from the source cluster into a local dump file with -o (the first command exports the index yztest; the second writes yz_tracing-000001 to yz_oss_tracing-000001.txt):
./bin/darwin64/esm -s https://testhost:9243 -m "elastic:testpasswd" -x "yztest" -o "yz_oss_test-000001"
./bin/darwin64/esm -s http://testhost -m elastic:testpasswd -x yz_tracing-000001 -o "yz_oss_tracing-000001.txt"
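The reverse direction uses -i/--input_file from the help output above to index a local dump into a cluster; a minimal sketch with placeholder host and target:
# restore the dump produced above into a destination cluster;
# host, auth, and target index name are placeholders
./bin/darwin64/esm \
  -d http://dst-host:9200 -n elastic:testpasswd \
  -y yz_tracing-000001 \
  -i yz_oss_tracing-000001.txt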