etcd Principles
Posted Flytiger1220
etcd overview
etcd is a distributed, consistent key-value store for shared configuration and service discovery, and an excellent highly available distributed key-value database. Internally it uses the Raft protocol as its consensus algorithm, and it is implemented in Go.
As a project inspired by ZooKeeper and Doozer, etcd offers similar functionality while focusing on four points:
- Simple: an HTTP+JSON API you can drive with nothing more than curl.
- Secure: optional SSL client certificate authentication.
- Fast: each instance supports a thousand writes per second.
- Reliable: properly distributed via the Raft algorithm.
Data in a distributed system can be divided into control data and application data. etcd's use cases assume control data by default; storing application data is recommended only when the volume is small but reads and updates are frequent.
Typical application scenarios fall into the following categories:
- Scenario 1: service discovery
- Scenario 2: message publish and subscribe
- Scenario 3: load balancing
- Scenario 4: distributed notification and coordination
- Scenario 5: distributed locks and distributed queues
- Scenario 6: cluster monitoring and leader election
As the simplest example: if you need a distributed store for configuration data, and you want it to read and write quickly, be highly available, deploy simply, and expose an HTTP interface, etcd is a good fit.
How etcd works
etcd uses the Raft protocol to keep the state of all nodes in the cluster consistent. In short, an etcd cluster is a distributed system in which multiple nodes communicate with one another and serve clients as a whole; every node stores the complete data set, and Raft guarantees that the data each node holds stays consistent.
Every etcd node maintains a state machine, and at any moment at most one valid leader exists. The leader handles all write operations from clients, and Raft guarantees that the changes a write makes to the state machine are reliably replicated to the other nodes.
The core of how etcd works lies in the Raft protocol and the watch mechanism.
Raft protocol
The Raft protocol has three main parts: leader election, log replication, and safety.
Leader election
Raft is a protocol for keeping the data of a group of server nodes consistent. The nodes form a cluster, with one leader serving external requests. When the cluster is first initialized, or when the leader fails, a new leader must be elected. At any moment each node in the cluster is in one of three roles: Leader, Follower, or Candidate. Elections have the following characteristics:
- When the cluster is initialized, every node starts in the Follower role.
- The cluster contains at most one valid leader, which synchronizes data with the other nodes via heartbeats.
- If a Follower receives no heartbeat from the leader within a certain time, it switches to the Candidate role and starts an election. The election succeeds when the candidate receives votes from more than half of the nodes, counting itself; it fails when the votes fall short of a majority, or the election times out. If no leader is chosen in a round, another round is held (this happens when several nodes campaign at once and none of them wins a majority).
- If a Candidate receives a message from a valid leader, it immediately abandons the election and returns to the Follower role.
To avoid a loop of failed elections, the time each node waits without a heartbeat before starting an election is a random value within a fixed range, which makes it unlikely that two nodes start campaigning at the same moment.
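The randomized timeout can be sketched as follows. This is an illustrative Python sketch, not etcd's actual Go implementation, and the 150–300 ms window is an assumed example rather than etcd's real configuration:

```python
import random

# Illustrative sketch: each follower draws its election timeout at random
# from a fixed window, so two followers rarely time out and start
# campaigning at the same moment.
BASE_MS = 150    # assumed lower bound of the timeout window
SPREAD_MS = 150  # assumed width of the window

def election_timeout(rng):
    """Randomized time (ms) a follower waits for a heartbeat before campaigning."""
    return BASE_MS + rng.randrange(SPREAD_MS)

rng = random.Random(0)
timeouts = [election_timeout(rng) for _ in range(5)]
# Every timeout falls in [150, 300); distinct values mean one candidate
# usually starts its election (and collects a majority) before any other.
assert all(BASE_MS <= t < BASE_MS + SPREAD_MS for t in timeouts)
```

Because the draws are independent and spread over the window, simultaneous campaigns become unlikely rather than impossible; a collision simply triggers another randomized round.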
Log replication
Log replication means that the leader turns every operation into a log entry, persists it to local disk, and then sends it over the network to the other nodes. Each node decides, based on the entry's logical clock (term) and log number (index), whether to persist the entry locally. Once the leader hears success from more than half of the nodes, counting itself, it considers the entry committed, applies it to the state machine, and returns the result to the client.
Note that every election produces a unique term number, which acts as a logical clock, and every log entry has a globally unique index.
The leader appends log entries to the other nodes over the network. When a node receives an append request, it first checks whether the entry's term is stale and whether the entry's index is no later than the index it has already committed. If the term is stale, or the entry is no newer than what it has committed, the node rejects the append and returns its current committed index; otherwise it appends the entry and returns success.
When the leader receives the replies to an append, any rejection carries the rejecting node's committed index, and the leader resends starting from the entry after that index.
The leader also applies a form of congestion control when replicating to the other nodes. Specifically, once a replication target rejects an append message, the leader enters a probing phase, sending entries one at a time until the target accepts, and then switches back to a fast-replication phase in which entries are appended in batches.
From this replication logic we can see that a slow node does not drag down the performance of the whole cluster. Another property is that data only flows from the leader to Follower nodes, which greatly simplifies the protocol.
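The follower-side acceptance check described above can be modeled in a few lines. This is an illustrative Python sketch of the decision rule, not etcd's Go code, and the function and parameter names are invented for the sketch:

```python
# Illustrative model of a follower handling one log-append message:
# reject it if the leader's term is stale, or if the entry is not newer
# than what this follower has already committed; on rejection, report
# the committed index so the leader knows where to resume sending.

def handle_append(follower_term, committed_index, msg_term, entry_index):
    """Return (accepted, committed_index) for one append message."""
    if msg_term < follower_term:
        return False, committed_index   # stale leader term: refuse
    if entry_index <= committed_index:
        return False, committed_index   # entry precedes the committed log
    return True, committed_index        # append the entry locally

# A message from an old term is rejected even if the entry is new:
assert handle_append(3, 10, 2, 11) == (False, 10)
# A current-term entry past the committed index is accepted:
assert handle_append(3, 10, 3, 11) == (True, 10)
```

The committed index returned on rejection is exactly what drives the probing phase: the leader backs up to that point and resends from the next entry.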
Safety
So far, leader election and log replication alone cannot guarantee that the nodes' data stays consistent. Imagine a node that crashes, restarts some time later, and is elected leader. During its downtime, as long as more than half of the nodes stayed alive, the cluster kept working and kept committing log entries, and those committed entries never reached the crashed node. When that node is later elected leader, it is missing some committed entries, and under Raft's replication rules it would replicate its own log to the other nodes, overwriting entries the cluster had already committed.
That is clearly unacceptable.
Other protocols solve this by having the newly elected leader query the other nodes, compare their data with its own to determine what the cluster has committed, and then fetch whatever it is missing. This approach has an obvious drawback: it lengthens the time the cluster takes to resume service (the cluster cannot serve requests during an election) and complicates the protocol.
Raft's solution is to constrain, in the election logic, which nodes may become leader, ensuring that the elected node already contains every log entry the cluster has committed. If the new leader already holds all committed entries, there is no need to reconcile data with the other nodes, which simplifies the flow and shortens the time to resume service.
This raises a question: with such a constraint, can a leader still be elected? The answer is yes, as long as more than half of the nodes are still alive. Any committed entry has, by definition, been persisted by more than half of the cluster, so the previous leader's last committed entry is also held by a majority of nodes. If a majority of nodes is still alive after the leader fails, at least one of those survivors must hold every committed entry.
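This majority argument is easy to check mechanically. The Python sketch below (an illustration, using a hypothetical 5-node cluster) enumerates every majority and verifies that any two of them share at least one node:

```python
from itertools import combinations

# Any committed entry lives on some majority of nodes; any election
# quorum is also a majority; and two majorities of the same cluster
# always intersect. Hence the election quorum always contains at least
# one node that holds every committed entry.
N = 5
quorum = N // 2 + 1  # 3 of 5
majorities = [set(c) for c in combinations(range(N), quorum)]
assert all(a & b for a in majorities for b in majorities)
```

The same check passes for any cluster size, since two subsets each larger than half of N cannot be disjoint.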
Differences between etcd and ZooKeeper
The main differences between etcd and ZooKeeper include:
- Operations: etcd is easy to operate and maintain; ZooKeeper is difficult to operate.
- Project activity: the etcd community and its development are active; ZooKeeper's activity has fallen off sharply.
- API: etcd provides HTTP+JSON and gRPC interfaces, usable across platforms and languages; ZooKeeper requires its own client library.
- Access security: etcd supports HTTPS access; ZooKeeper lacks this.
- Positioning: etcd is mainly a key-value store for critical data; ZooKeeper is a centralized service for managing configuration and similar information.
- etcd is more lightweight and easier to use.
Watch mechanism
The watch mechanism deserves special attention. A watcher implements subscribe/notify: when a value changes, the subscribed parties are notified. In etcd the unit of change is a key-value pair; in ZooKeeper it is a znode (a value change, node deletion, and so on).
ZooKeeper
- watch children only watches direct children; it cannot recursively watch grandchildren.
- watch children only observes the creation and deletion of children, not changes to a child's value.
- watch node only works on a node that already exists; for a node that does not exist yet, you must watch for its existence.
Beyond these limitations, ZooKeeper's own documentation points out that events occurring between a watch firing and the watch being re-registered are dropped and cannot be captured.
etcd
etcd supports single-key watches, prefix watches, and ranged watches. Unlike ZooKeeper, etcd does not require different watch APIs for different kinds of events; the three types of watch differ only in how the key argument is treated:
- a single-key watch watches only the single key passed in;
- a ranged watch watches a range of keys, and events on any key in the range are captured;
- a prefix watch watches every key with the given prefix.
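A prefix watch is really a ranged watch under the hood: etcd clients derive a half-open key range from the prefix by incrementing its last byte (etcd clientv3 exposes this as GetPrefixRangeEnd). An illustrative Python sketch of that derivation:

```python
def prefix_range_end(prefix: bytes) -> bytes:
    """Exclusive upper bound of the key range covered by a prefix:
    increment the last byte that is not 0xff and truncate after it."""
    b = bytearray(prefix)
    for i in reversed(range(len(b))):
        if b[i] < 0xFF:
            b[i] += 1
            return bytes(b[: i + 1])
    return b"\x00"  # in etcd's API, a range_end of "\x00" means "all keys"

# Watching the prefix b"key" is the ranged watch over [b"key", b"kez"):
assert prefix_range_end(b"key") == b"kez"
assert b"key" <= b"key3" < prefix_range_end(b"key")
```

Every key starting with the prefix sorts at or after the prefix itself and strictly before the computed end, so the range captures exactly the prefixed keys.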
Can ZooKeeper be used as distributed storage?
etcd and ZooKeeper also overlap heavily in application scenarios, so is ZooKeeper essentially a distributed storage component? Looking into whether ZooKeeper can serve as a distributed storage system, the answer (summarized from a Zhihu discussion) is that ZooKeeper should only store metadata, for the following reasons:
- a znode can hold at most 1 MB of data;
- write performance is low: to guarantee consistency, each write must complete on n/2+1 nodes before it counts as done;
- ZooKeeper keeps all of its data in memory, which only suits metadata;
- ZooKeeper's use cases demand strong consistency.
etcd command line and clients
etcd organizes keys in a hierarchical namespace, similar to directories in a file system. A user-specified key can be a bare name such as testkey, which effectively lives under the root directory /, or a path such as /cluster1/node2/testkey. Note that in the v3 API used in the examples below, the keyspace is actually flat: such a path is simply a key that shares a common prefix with related keys, rather than a real directory tree.
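Since the v3 hierarchy is purely a naming convention, "listing a directory" is nothing more than a prefix range query. A minimal Python sketch over a plain dict (the keys are made-up examples):

```python
# The v3 keyspace is flat; slash-separated names are just keys that
# happen to share a prefix, and a "directory listing" is a prefix scan.
store = {
    "/cluster1/node2/testkey": "v1",
    "/cluster1/node3/testkey": "v2",
    "/other/testkey": "v3",
}

def list_prefix(kv, prefix):
    """Return every key-value pair whose key starts with the prefix."""
    return {k: v for k, v in kv.items() if k.startswith(prefix)}

assert list_prefix(store, "/cluster1/") == {
    "/cluster1/node2/testkey": "v1",
    "/cluster1/node3/testkey": "v2",
}
```

The etcdctl --prefix queries shown below behave the same way, just against etcd's sorted key index instead of a dict.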
put (create/update)
- put can create a new value or update an existing one.
[root@localhost etcd-v3.5.1-linux-amd64]# etcdctl put key1 test1
OK
get (query)
- query a single key
[root@localhost etcd-v3.5.1-linux-amd64]# etcdctl get key2
key2
value2
- range query (the end key is excluded)
[root@localhost etcd-v3.5.1-linux-amd64]# etcdctl get key2 key4
key2
value2
key3
value3
- prefix query
[root@localhost etcd-v3.5.1-linux-amd64]# etcdctl get ke --prefix
key1
value1
key2
value2
key3
value3
- use -w json to output the result as JSON
[root@localhost etcd-v3.5.1-linux-amd64]# etcdctl get key1 -w json | json_pp
{
   "count" : 1,
   "kvs" : [
      {
         "value" : "dGVzdDM=",
         "version" : 4,
         "mod_revision" : 93310,
         "create_revision" : 93288,
         "key" : "a2V5MQ=="
      }
   ],
   "header" : {
      "revision" : 93310,
      "cluster_id" : 14841639068965178418,
      "member_id" : 10276657743932975437,
      "raft_term" : 3
   }
}
del (delete)
- del deletes data; only the key needs to be supplied.
[root@localhost etcd-v3.5.1-linux-amd64]# etcdctl del key1
1
watch (monitor)
etcd provides the watch mechanism described earlier to deliver incremental data updates. watch can be used in two ways: watching a single key, or watching a key prefix; in real applications the prefix form is the more common.
Open two terminals: in one, watch all keys with the prefix key; in the other, modify data:
[root@localhost ~]# etcdctl put key2 value2
OK
[root@localhost ~]# etcdctl put key3 value3
OK
[root@localhost ~]# etcdctl del key3
1
[root@localhost etcd-v3.5.1-linux-amd64]# etcdctl watch key --prefix
PUT
key2
value2
PUT
key3
value3
DELETE
key3
As the output shows, every modification operation is recorded.
txn (transaction)
txn reads multiple requests from standard input and executes them as a single atomic transaction. A transaction consists of a list of compare conditions, a list of requests to execute when the conditions succeed (success means every condition is true), and a list of requests to execute when they fail (failure means at least one condition is false).
[root@localhost ~]# etcdctl put user frank
OK
[root@localhost ~]# etcdctl txn -i
compares:
value("user") = "frank"
success requests (get, put, del):
put result1 ok
put result ok
put user frank123
failure requests (get, put, del):
SUCCESS
OK
OK
OK
[root@localhost ~]# etcdctl get result --prefix
result
ok
result1
ok
[root@localhost ~]# etcdctl get user
user
frank123
lease (lease)
A key TTL (time to live) is one of etcd's important features: it lets you set an expiry on a key. Unlike Redis, etcd requires you to first create a lease and then attach it to a key with put --lease=<leaseID>; the lease expires automatically after its TTL, and the TTL is managed through the lease, which is how key timeouts are implemented.
[root@localhost ~]# etcdctl lease grant 30
lease 694d7e903949721e granted with TTL(30s)
[root@localhost ~]# etcdctl put --lease=694d7e903949721e foo bar
OK
[root@localhost ~]# etcdctl get foo
foo
bar
-------- after 30 s, the key can no longer be queried ----------
[root@localhost ~]# etcdctl get foo
-------- after 30 s, the lease is gone as well ----------
[root@localhost ~]# etcdctl put --lease=694d7e903949721e foo bar
"level":"warn","ts":"2022-02-10T14:45:23.936+0800","logger":"etcd-client","caller":"v3/retry_interceptor.go:62","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc000130a80/127.0.0.1:2379","attempt":0,"error":"rpc error: code = NotFound desc = etcdserver: requested lease not found"
Error: etcdserver: requested lease not found
etcd cluster installation
1. Run the following on each of the three virtual machines (192.168.131.48, 192.168.131.49, 192.168.131.50):
[root@node1 ~]# yum -y install etcd
[root@node2 ~]# yum -y install etcd
[root@master ~]# yum -y install etcd
[root@master ~]# etcd --version
etcd Version: 3.2.22
Git SHA: 1674e68
Go Version: go1.9.4
Go OS/Arch: linux/amd64
2. Edit the configuration files
master node
[root@master ~]# egrep -v "^$|^#" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" # etcd data directory
ETCD_LISTEN_PEER_URLS="http://192.168.131.48:2380" # URL used for intra-cluster communication
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" # URL for external clients
ETCD_NAME="etcd01" # name of this etcd instance
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.131.48:2380" # peer URL advertised to the other cluster members
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379" # client URL advertised externally
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.131.48:2380,etcd02=http://192.168.131.49:2380,etcd03=http://192.168.131.50:2380" # initial cluster member list
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" # cluster token (name)
ETCD_INITIAL_CLUSTER_STATE="new" # initial cluster state; "new" means bootstrapping a new cluster
node-1
[root@node1 etcd]# egrep -v "^$|^#" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.131.49:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="etcd02"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.131.49:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.131.48:2380,etcd02=http://192.168.131.49:2380,etcd03=http://192.168.131.50:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
node-2
[root@node2 ~]# egrep -v "^$|^#" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.131.50:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="etcd03"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.131.50:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.131.48:2380,etcd02=http://192.168.131.49:2380,etcd03=http://192.168.131.50:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
3. Start etcd on each node
[root@localhost home]# systemctl start etcd
[root@localhost home]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since Fri 2022-01-28 10:06:34 CST; 56s ago
Main PID: 13026 (etcd)
CGroup: /system.slice/etcd.service
└─13026 /usr/bin/etcd --name=etcd02 --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://0.0.0.0:2379
4. Test the cluster
[root@localhost home]# etcdctl member list
4382c9a4b9cc3a07: name=etcd01 peerURLs=http://192.168.131.48:2380 clientURLs=http://0.0.0.0:2379 isLeader=true
5b6b753d9a407c0f: name=etcd03 peerURLs=http://192.168.131.50:2380 clientURLs=http://0.0.0.0:2379 isLeader=false
c37247386b07c421: name=etcd02 peerURLs=http://192.168.131.49:2380 clientURLs=http://0.0.0.0:2379 isLeader=false
Set a value on one node:
[root@master etcd]# etcdctl set /test/key "test kubernetes"
test kubernetes
Read the value on another node:
[root@node1 etcd]# etcdctl get /test/key
test kubernetes
etcd v3 library functions (lua-resty-etcd)
new
syntax: cli, err = etcd.new([option:table])
• option:table
o protocol: string - v3.
o http_host: string - default http://127.0.0.1:2379
o ttl: int - default -1. Default ttl for key operations; set to -1 to disable ttl.
o key_prefix: string - append this prefix path string to the key in operation urls.
o timeout: int default request timeout seconds.
o api_prefix: string to suit etcd v3 api gateway. it will autofill by fetching etcd version if this option empty.
o ssl_verify: boolean - whether to verify the etcd certificate when originating TLS connection with etcd (if you want to communicate to etcd with TLS connection, use https scheme in your http_host), default is true.
o ssl_cert_path: string - path to the client certificate
o ssl_key_path: string - path to the client key
o serializer: string - serializer type, default json, also support raw to keep origin string value.
o extra_headers: table - adding custom headers for etcd requests.
o sni: string - adding custom SNI for etcd TLS requests.
This function creates an etcd client table, or returns an error message. Usage:
local cli, err = require("resty.etcd").new({protocol = "v3"})
get
syntax: res, err = cli:get(key:string[, opts])
• key: string value.
• opts: optional options.
o timeout: (int) request timeout seconds. Set to 0 would use lua_socket_connect_timeout as timeout.
o revision: (int) revision is the point-in-time of the key-value store to use for the range. If revision is less than or equal to zero, the range is over the newest key-value store. If the revision has been compacted, ErrCompacted is returned as a response.
This function looks up the value for a key. Usage:
local res, err = cli:get('/path/to/key')
set
syntax: res, err = cli:set(key:string, val:JSON value [, opts:table])
• key: string value.
• val: the value which can be encoded via JSON.
• opts: optional options.
o timeout: (int) request timeout seconds. Set to 0 would use lua_socket_connect_timeout as timeout.
o lease: (int) the lease ID to associate with the key in the key-value store.
o prev_kv: (bool) If prev_kv is set, etcd gets the previous key-value pair before changing it. The previous key-value pair will be returned in the put response.
o ignore_value: (bool) If ignore_value is set, etcd updates the key using its current value. Returns an error if the key does not exist.
o ignore_lease: (bool) If ignore_lease is set, etcd updates the key using its current lease. Returns an error if the key does not exist.
This function sets a key to the given value; if the key already exists, it is overwritten regardless of the previous value's type. Usage:
local res, err = cli:set('/path/to/key', 'val', 10)
setnx
syntax: res, err = cli:setnx(key:string, val:JSON value [, opts:table])
• key: string value.
• val: the value which can be encoded via JSON.
• opts: optional options.
o timeout: (int) request timeout seconds. Set to 0 would use lua_socket_connect_timeout as timeout.
Sets the key to the given value only if the key does not already exist. Usage:
local res, err = cli:setnx('/path/to/key', 'val', 10)
setx
syntax: res, err = cli:setx(key:string, val:JSON value [, opts:table])
• key: string value.
• val: the value which can be encoded via JSON.
• opts: optional options.
o timeout: (int) request timeout seconds. Set to 0 would use lua_socket_connect_timeout as timeout.
Sets the value of a key that already exists, replacing the old value (the counterpart of setnx; it has no effect if the key does not exist). Usage:
local res, err = cli:setx('/path/to/key', 'val', 10)
delete
syntax: res, err = cli:delete(key:string [, opts:table])
• key: string value.
• opts: optional options.
o timeout: (int) request timeout seconds. Set to 0 would use lua_socket_connect_timeout as timeout.
o prev_kv: (bool) If prev_kv is set, etcd gets the previous key-value pairs before deleting it. The previous key-value pairs will be returned in the delete response.
Deletes a key-value pair; if prev_kv is set, the previous value is returned in the response before the key is deleted. Usage:
local res, err = cli:delete('/path/to/key')
watch
syntax: res, err = cli:watch(key:string [, opts:table])
• key: string value.
• opts: optional options.
o timeout: (int) request timeout seconds. Set to 0 would use lua_socket_connect_timeout as timeout.
o start_revision: (int) start_revision is an optional revision to watch from (inclusive). No start_revision is “now”.
o progress_notify: (bool) progress_notify is set so that the etcd server will periodically send a WatchResponse with no events to the new watcher if there are no recent events.
o filters: (slice of (enum FilterType NOPUT = 0;NODELETE = 1;)) filters filter the events at server side before it sends back to the watcher.
o prev_kv: (bool) If prev_kv is set, created watcher gets the previous KV before the event happens. If the previous KV is already compacted, nothing will be returned.
o watch_id: (int) If watch_id is provided and non-zero, it will be assigned to this watcher. Since creating a watcher in etcd is not a synchronous operation, this can be used to ensure that ordering is correct when creating multiple watchers on the same stream. Creating a watcher with an ID already in use on the stream will cause an error to be returned.
o fragment: (bool) fragment enables splitting large revisions into multiple watch responses.
o need_cancel: (bool) if watch need to be cancel, watch would return http_cli for further cancelation. See watchcancel for detail.
Watches for changes to a key. Usage:
local res, err = cli:watch('/path/to/key')
watchcancel
syntax: res, err = cli:watchcancel(http_cli:table)
• http_cli: the http client needs to revoke.
Cancels a watch before it times out; the watch must have been created with need_cancel = true. Usage:
local res, err, http_cli = cli:watch('/path/to/key', {need_cancel = true})
res = cli:watchcancel(http_cli)
readdir
syntax: res, err = cli:readdir(dir:string [, opts:table])
• key: string value.
• opts: optional options.
o timeout: (int) request timeout seconds. Set to 0 would use lua_socket_connect_timeout as timeout.
o revision: (int) revision is the point-in-time of the key-value store to use for the range. If revision is less than or equal to zero, the range is over the newest key-value store. If the revision has been compacted, ErrCompacted is returned as a response.
o limit: (int) limit is a limit on the number of keys returned for the request. When limit is set to 0, it is treated as no limit.
o sort_order: (int [SortNone:0, SortAscend:1, SortDescend:2]) sort_order is the order for returned sorted results.
o sort_target: (int [SortByKey:0, SortByVersion:1, SortByCreateRevision:2, SortByModRevision:3, SortByValue:4]) sort_target is the key-value field to use for sorting.
o keys_only: (bool) keys_only when set returns only the keys and not the values.
o count_only: (bool) count_only when set returns only the count of the keys in the range.
Reads all key-value pairs under a directory. Usage:
local res, err = cli:readdir('/path/to/dir')
watchdir
syntax: res, err = cli:watchdir(dir:string [, opts:table])
• key: string value.
• opts: optional options.
o timeout: (int) request timeout seconds. Set to 0 would use lua_socket_connect_timeout as timeout.
o start_revision: (int) start_revision is an optional revision to watch from (inclusive). No start_revision is “now”.
o progress_notify: (bool) progress_notify is set so that the etcd server will periodically send a WatchResponse with no events to the new watcher if there are no recent events.
o filters: (slice of [enum FilterType NOPUT = 0;NODELETE = 1;]) filters filter the events at server side before it sends back to the watcher.
o prev_kv: (bool) If prev_kv is set, created watcher gets the previous KV before the event happens. If the previous KV is already compacted, nothing will be returned.
o watch_id: (int) If watch_id is provided and non-zero, it will be assigned to this watcher. Since creating a watcher in etcd is not a synchronous operation, this can be used to ensure that ordering is correct when creating multiple watchers on the same stream. Creating a watcher with an ID already in use on the stream will cause an error to be returned.
o fragment: (bool) fragment enables splitting large revisions into multiple watch responses.
Watches for changes to every key under a directory. Usage:
local res, err = cli:watchdir('/path/to/dir')
rmdir
syntax: res, err = cli:rmdir(dir:string [, opts:table])
• key: string value.
• opts: optional options.
o timeout: (int) request timeout seconds. Set to 0 would use lua_socket_connect_timeout as timeout.
o prev_kv: (bool) If prev_kv is set, etcd gets the previous key-value pairs before deleting it. The previous key-value pairs will be returned in the delete response.
Deletes all keys under a directory. Usage:
local res, err = cli:rmdir('/path/to/dir')
txn
txn (transaction) executes multiple requests as a single atomic transaction. A transaction consists of a list of compare conditions, a list of requests to execute when the conditions succeed (success means every condition is true), and a list of requests to execute when they fail (failure means at least one condition is false).
syntax: res, err = cli:txn(compare:array, success:array, failure:array [, opts:table])
• compare: array of table.
• success: array of table.
• failure: array of table.
• opts: optional options.
o timeout: (int) request timeout seconds. Set to 0 would use lua_socket_connect_timeout as timeout.
In the transaction below, compare is evaluated first; if it is true, the success requests run, and if it is false, nothing is done (failure is nil here):
local compare = {}
compare[1] = {}
compare[1].target = "CREATE"
compare[1].key = encode_base64("test")
compare[1].createRevision = 0

local success = {}
success[1] = {}
success[1].requestPut = {}
success[1].requestPut.key = encode_base64("test")
success[1].requestPut.value = encode_base64("test") -- values must be base64-encoded as well

local res, err = cli:txn(compare, success, nil)
grant
syntax: res, err = cli:grant(TTL:int [, ID:int])
• TTL: advisory time-to-live in seconds.
• ID: the requested ID for the lease. If ID is set to 0, the lessor chooses an ID.
Creates a lease. The lease expires if the server does not receive a keep-alive within its time to live. Every key attached to the lease expires with it; when a lease expires, all attached keys are deleted, and each deleted key generates a delete event in the event history.
-- grant a lease with a 5-second TTL
local res, err = cli:grant(5)
-- attach a key to the lease; the lease ID is contained in res
local data, err = cli:set('/path/to/key', 'val', {lease = res.body.ID})
revoke
syntax: res, err = cli:revoke(ID:int)
• ID: the lease ID to revoke. When the ID is revoked, all associated keys will be deleted.
Revokes a lease; every key attached to the lease expires and is deleted.
local res, err = cli:grant(5)
local data, err = cli:set('/path/to/key', 'val', {lease = res.body.ID})
local data, err = cli:revoke(res.body.ID)
local data, err = cli:get('/path/to/key')
-- the response will contain no kvs
keepalive
syntax: res, err = cli:keepalive(ID:int)
• ID: the lease ID for the lease to keep alive.
Keeps a lease alive by streaming keep-alive requests from the client to the server and keep-alive responses from the server back to the client.
leases
syntax: res, err = cli:leases()
Lists all existing leases.
version
syntax: res, err = cli:version()
Returns etcd's version information.