k8s: Deploying an External etcd Cluster

Posted by shanhubei


How to back up and restore the etcd database (to be updated).


etcd is a distributed key-value store, and Kubernetes uses it for all of its cluster state, so prepare an etcd database first. To avoid a single point of failure, deploy etcd as a cluster: a 3-member cluster tolerates the loss of 1 machine (quorum is 2 of 3). To save machines, the 3 etcd instances here are placed on one Master node and two worker (Node) nodes.

etcd instance  IP
etcd-1         172.23.199.15
etcd-2         172.23.199.16
etcd-3         172.23.199.17

1. Prepare the cfssl certificate tooling

# Install the cfssl certificate tools (if the downloads fail, try switching to the http:// protocol)
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64

mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2. Generate the etcd certificates

# Create working directories for the self-signed CA
mkdir -p ~/TLS/{etcd,k8s}
cd /root/TLS/etcd
# Write the two CA config files (contents omitted here)
cat > ca-config.json <<EOF
...
EOF
cat > ca-csr.json <<EOF
...
EOF
# Generate the self-signed CA certificate (produces ca.pem and ca-key.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
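The heredoc bodies for the two CA files above were lost from the post. A commonly used pair for this step looks like the following sketch; the profile name `www` matches the `-profile=www` flag used below, while the expiry and the `names` fields are illustrative assumptions:

```shell
# Hypothetical reconstruction of the two CA config files.
cat > ca-config.json <<'EOF'
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<'EOF'
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF
```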

Next, use the self-signed CA to issue the etcd HTTPS server certificate.

# Create the certificate signing request file (contents omitted here)
cat > server-csr.json <<EOF
...
EOF

Note: the IPs in the hosts field above must include the internal cluster IPs of every etcd node; to make later scale-out easier, you can list a few spare IPs as well.
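A sketch of the truncated server-csr.json, using the three node IPs from the table above (the `names` block is an assumption carried over from the CA example):

```shell
# Hypothetical reconstruction of the server CSR file.
cat > server-csr.json <<'EOF'
{
  "CN": "etcd",
  "hosts": [
    "172.23.199.15",
    "172.23.199.16",
    "172.23.199.17"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF
```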

# Generate the etcd server certificate (produces server.pem and server-key.pem)
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server

3. Download the etcd binaries

https://github.com/etcd-io/etcd/releases/download/v3.4.9/etcd-v3.4.9-linux-amd64.tar.gz

4. Deploy the etcd cluster

Perform the following steps once, on node 1 (the Master); for node 2 (node01) and node 3 (node02), simply copy the results over from node 1.

1) Create the working directory and unpack the binaries

mkdir -pv /opt/etcd/{bin,cfg,ssl}
tar zxvf etcd-v3.4.9-linux-amd64.tar.gz
mv etcd-v3.4.9-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
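Note the brace expansion in the commands above: `{bin,cfg,ssl}` expands to three paths, so one call creates all three subdirectories (bash syntax):

```shell
# {bin,cfg,ssl} expands to /opt/etcd/bin /opt/etcd/cfg /opt/etcd/ssl
mkdir -p /opt/etcd/{bin,cfg,ssl}
ls /opt/etcd
```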

2) Create the etcd configuration file

cat > /opt/etcd/cfg/etcd.conf <<EOF
...
EOF

The parameters in the configuration file mean the following:

#ETCD_NAME: member name, unique within the cluster
#ETCD_DATA_DIR: data directory
#ETCD_LISTEN_PEER_URLS: peer (cluster) listen URL
#ETCD_LISTEN_CLIENT_URLS: client listen URL
#ETCD_INITIAL_ADVERTISE_PEER_URLS: peer URL advertised to the cluster
#ETCD_ADVERTISE_CLIENT_URLS: client URL advertised to clients
#ETCD_INITIAL_CLUSTER: addresses of all cluster members
#ETCD_INITIAL_CLUSTER_TOKEN: cluster token
#ETCD_INITIAL_CLUSTER_STATE: state when joining; "new" for a new cluster, "existing" to join an existing one
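Putting these parameters together, the truncated etcd.conf for node 1 (etcd-1, 172.23.199.15) would look roughly like this. It is a sketch assembled from the parameter list above and the per-node edits shown later, written to a local file here for illustration:

```shell
# Sketch of /opt/etcd/cfg/etcd.conf for etcd-1 (adjust name and IPs per node).
cat > etcd.conf <<'EOF'
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.23.199.15:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.23.199.15:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.23.199.15:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.23.199.15:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.23.199.15:2380,etcd-2=https://172.23.199.16:2380,etcd-3=https://172.23.199.17:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```

Copy it to /opt/etcd/cfg/etcd.conf on the node.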

3) Manage etcd with systemd

cat > /usr/lib/systemd/system/etcd.service <<EOF
...
EOF
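The unit file body was truncated as well. Below is a sketch consistent with the etcd v3.2 unit shown in the walkthrough at the end of this post, adapted to the /opt/etcd paths used here; with etcd v3.4 the ETCD_* variables are read directly from the environment, so only the TLS flags are passed (treat the exact flag set as an assumption). Written to a local file for illustration:

```shell
# Sketch of /usr/lib/systemd/system/etcd.service for the v3.4 layout above.
cat > etcd.service <<'EOF'
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd.conf
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --peer-cert-file=/opt/etcd/ssl/server.pem \
  --peer-key-file=/opt/etcd/ssl/server-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
```

Copy it to /usr/lib/systemd/system/etcd.service on the node.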

4) Copy the certificates to the path referenced by the configuration

cp ~/TLS/etcd/ca*pem ~/TLS/etcd/server*pem /opt/etcd/ssl/

5) Start etcd and enable it at boot

systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd

Note: `systemctl start etcd` will appear to hang, because the other etcd members are not deployed yet. Also open TCP ports 2379 and 2380 in the firewall; otherwise startup will hang as well.

Use the following to inspect the logs:

# Save the etcd journal to file a, then search it for the node IP
# (this step is optional)
journalctl -u etcd > a
grep 172.23.199.15 a

6) Copy everything generated on node 1 to nodes 2 and 3

# Copy the etcd working directory and the systemd unit to the other two hosts
scp -r /opt/etcd/ root@k8s-node01:/opt/
scp /usr/lib/systemd/system/etcd.service root@k8s-node01:/usr/lib/systemd/system/

scp -r /opt/etcd/ root@k8s-node02:/opt/
scp /usr/lib/systemd/system/etcd.service root@k8s-node02:/usr/lib/systemd/system/

Note: on each of the other two nodes, edit the etcd configuration file (5 fields in total):

vi /opt/etcd/cfg/etcd.conf
#[Member]
ETCD_NAME="etcd-1"                 # change 1: etcd-2 on node 2, etcd-3 on node 3
ETCD_LISTEN_PEER_URLS="https://172.23.199.15:2380"    # change 2: this server's IP
ETCD_LISTEN_CLIENT_URLS="https://172.23.199.15:2379"  # change 3: this server's IP

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.23.199.15:2380" # change 4: this server's IP
ETCD_ADVERTISE_CLIENT_URLS="https://172.23.199.15:2379"       # change 5: this server's IP
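The five edits can also be applied non-interactively. A small sketch (the `NAME`/`IP` variables, the local file name, and the in-place sed approach are my own, not from the original post); note that ETCD_INITIAL_CLUSTER lists all members and must stay untouched:

```shell
# Patch a copy of the node-1 config for another node (here: etcd-2).
NAME=etcd-2
IP=172.23.199.16

cat > node.conf <<'EOF'
#[Member]
ETCD_NAME="etcd-1"
ETCD_LISTEN_PEER_URLS="https://172.23.199.15:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.23.199.15:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.23.199.15:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.23.199.15:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://172.23.199.15:2380,etcd-2=https://172.23.199.16:2380,etcd-3=https://172.23.199.17:2380"
EOF

# Rewrite only the five per-node fields; ETCD_INITIAL_CLUSTER is left alone.
sed -i \
  -e "s|^ETCD_NAME=.*|ETCD_NAME=\"$NAME\"|" \
  -e "s|^ETCD_LISTEN_PEER_URLS=.*|ETCD_LISTEN_PEER_URLS=\"https://$IP:2380\"|" \
  -e "s|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS=\"https://$IP:2379\"|" \
  -e "s|^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*|ETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://$IP:2380\"|" \
  -e "s|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS=\"https://$IP:2379\"|" \
  node.conf
```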

Start and enable at boot (same as above):

systemctl daemon-reload && systemctl enable etcd && systemctl start etcd && systemctl status etcd

7) Check the cluster status
On any etcd node, run the following; HEALTH=true means the member is healthy (make sure the IPs are correct):

ETCDCTL_API=3 /opt/etcd/bin/etcdctl \
  --cacert=/opt/etcd/ssl/ca.pem \
  --cert=/opt/etcd/ssl/server.pem \
  --key=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://172.23.199.15:2379,https://172.23.199.16:2379,https://172.23.199.17:2379" \
  endpoint health --write-out=table

Expected output:

+----------------------------+--------+------------+-------+
|        ENDPOINT            | HEALTH |    TOOK    | ERROR |
+----------------------------+--------+------------+-------+
| https://172.23.199.15:2379 |   true | 6.801153ms |       |
| https://172.23.199.16:2379 |   true |  5.66978ms |       |
| https://172.23.199.17:2379 |   true | 5.644431ms |       |
+----------------------------+--------+------------+-------+

Deploying an etcd Cluster for a Kubernetes Container Cluster (etcd v3.2 walkthrough)

Install etcd

Binary package download: https://github.com/etcd-io/etcd/releases/tag/v3.2.12

[root@master ~]# GOOGLE_URL=https://storage.googleapis.com/etcd
[root@master ~]# GITHUB_URL=https://github.com/coreos/etcd/releases/download
[root@master ~]# DOWNLOAD_URL=$GOOGLE_URL
[root@master ~]# ETCD_VER=v3.2.12
[root@master ~]# curl -L $DOWNLOAD_URL/$ETCD_VER/etcd-$ETCD_VER-linux-amd64.tar.gz -o /tmp/etcd-$ETCD_VER-linux-amd64.tar.gz


  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 10.0M  100 10.0M    0     0  2161k      0  0:00:04  0:00:04 --:--:-- 2789k
[root@master ~]# ls /tmp
etcd-v3.2.12-linux-amd64.tar.gz 
Unpack:
[root@master ~]# tar -zxf /tmp/etcd-v3.2.12-linux-amd64.tar.gz 
[root@master ~]# ls
etcd-v3.2.12-linux-amd64 
Create the cluster deployment directory:
[root@master ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
[root@master ~]# tree /opt/kubernetes
/opt/kubernetes
├── bin
├── cfg
└── ssl
[root@master ~]# mv etcd-v3.2.12-linux-amd64/etcd /opt/kubernetes/bin
[root@master ~]# mv etcd-v3.2.12-linux-amd64/etcdctl /opt/kubernetes/bin
[root@master ~]# ls /opt/kubernetes/bin
etcd  etcdctl
Add the configuration file:
[root@master ~]# cat /opt/kubernetes/cfg/etcd
#[Member]
# etcd member name
ETCD_NAME="etcd03"
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# peer (cluster) listen URL
ETCD_LISTEN_PEER_URLS="https://192.168.238.130:2380"
# client listen URL
ETCD_LISTEN_CLIENT_URLS="https://192.168.238.130:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.238.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.238.130:2379"
# cluster membership (comma-separated)
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380"
#token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@master ~]# cat /usr/lib/systemd/system/etcd.service 
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
ExecStart=/opt/kubernetes/bin/etcd \
  --name=$ETCD_NAME \
  --data-dir=$ETCD_DATA_DIR \
  --listen-peer-urls=$ETCD_LISTEN_PEER_URLS \
  --listen-client-urls=$ETCD_LISTENT_CLIENT_URLS,http://127.0.0.1:2379 \
  --advertise-client-urls=$ETCD_ADVERTISE_CLIENT_URLS \
  --initial-advertise-peer-urls=$ETCD_INITIAL_ADVERTISE_PEER_URLS \
  --initial-cluster=$ETCD_INITIAL_CLUSTER \
  --initial-cluster-token=$ETCD_INITIAL_CLUSTER \
  --initial-cluster-state=new \
  --cert-file=/opt/kubernetes/ssl/server.pem \
  --key-file=/opt/kubernetes/ssl/server-key.pem \
  --peer-cert-file=/opt/kubernetes/ssl/server.pem \
  --peer-key-file=/opt/kubernetes/ssl/server-key.pem \
  --trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
  --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Put the certificates in the expected directory:
[root@master ~]# cp ssl/server*pem ssl/ca*pem /opt/kubernetes/ssl/
[root@master ~]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
Start etcd:
[root@master ~]# systemctl start etcd
Job for etcd.service failed because the control process exited with error code. See "systemctl status etcd.service" and "journalctl -xe" for details.
Startup failed; check the logs. The "invalid value" error below comes from the misspelled $ETCD_LISTENT_CLIENT_URLS in the unit's ExecStart: the undefined variable expands to nothing, leaving only ",http://127.0.0.1:2379" as the flag value.
[root@master ~]# journalctl -u etcd
-- Logs begin at Tue 2019-07-02 17:22:07 EDT, end at Tue 2019-07-02 17:58:00 EDT. --
Jul 02 17:57:59 master systemd[1]: Starting Etcd Server...
Jul 02 17:57:59 master etcd[8172]: invalid value ",http://127.0.0.1:2379" for flag -listen-
Jul 02 17:57:59 master etcd[8172]: usage: etcd [flags]
Jul 02 17:57:59 master etcd[8172]: start an etcd server
Jul 02 17:57:59 master etcd[8172]: etcd --version
Jul 02 17:57:59 master etcd[8172]: show the version of etcd
Jul 02 17:57:59 master etcd[8172]: etcd -h | --help
Jul 02 17:57:59 master etcd[8172]: show the help information about etcd
Jul 02 17:57:59 master etcd[8172]: etcd --config-file
Jul 02 17:57:59 master etcd[8172]: path to the server configuration file
Jul 02 17:57:59 master etcd[8172]: etcd gateway
Jul 02 17:57:59 master etcd[8172]: run the stateless pass-through etcd TCP connection forwa
Jul 02 17:57:59 master etcd[8172]: etcd grpc-proxy
Jul 02 17:57:59 master etcd[8172]: run the stateless etcd v3 gRPC L7 reverse proxy
Jul 02 17:57:59 master systemd[1]: etcd.service: main process exited, code=exited, status=2
Jul 02 17:57:59 master systemd[1]: Failed to start Etcd Server.
Jul 02 17:57:59 master systemd[1]: Unit etcd.service entered failed state.
Jul 02 17:57:59 master systemd[1]: etcd.service failed.
Jul 02 17:57:59 master systemd[1]: etcd.service holdoff time over, scheduling restart.
Jul 02 17:57:59 master systemd[1]: Stopped Etcd Server.
Jul 02 17:57:59 master systemd[1]: Starting Etcd Server...
Jul 02 17:57:59 master etcd[8176]: invalid value ",http://127.0.0.1:2379" for flag -listen-
Jul 02 17:57:59 master etcd[8176]: usage: etcd [flags]
Jul 02 17:57:59 master etcd[8176]: start an etcd server
Jul 02 17:57:59 master etcd[8176]: etcd --version
Jul 02 17:57:59 master etcd[8176]: show the version of etcd
Jul 02 17:57:59 master etcd[8176]: etcd -h | --help
Jul 02 17:57:59 master etcd[8176]: show the help information about etcd
Jul 02 17:57:59 master etcd[8176]: etcd --config-file
Jul 02 17:57:59 master etcd[8176]: path to the server configuration file
Jul 02 17:57:59 master etcd[8176]: etcd gateway
Jul 02 17:57:59 master etcd[8176]: run the stateless pass-through etcd TCP connection forwa
Jul 02 17:57:59 master etcd[8176]: etcd grpc-proxy
Jul 02 17:57:59 master etcd[8176]: run the stateless etcd v3 gRPC L7 reverse proxy
Jul 02 17:57:59 master systemd[1]: etcd.service: main process exited, code=exited, status=2
Jul 02 17:57:59 master systemd[1]: Failed to start Etcd Server.
Jul 02 17:57:59 master systemd[1]: Unit etcd.service entered failed state.
Jul 02 17:57:59 master systemd[1]: etcd.service failed.
Jul 02 17:57:59 master systemd[1]: etcd.service holdoff time over, scheduling restart.
Jul 02 17:57:59 master systemd[1]: Stopped Etcd Server.
Jul 02 17:57:59 master systemd[1]: Starting Etcd Server...

[root@master ~]# tail -n 20 /var/log/messages
Jul  2 17:58:00 localhost etcd: etcd --version
Jul  2 17:58:00 localhost etcd: show the version of etcd
Jul  2 17:58:00 localhost etcd: etcd -h | --help
Jul  2 17:58:00 localhost etcd: show the help information about etcd
Jul  2 17:58:00 localhost etcd: etcd --config-file
Jul  2 17:58:00 localhost etcd: path to the server configuration file
Jul  2 17:58:00 localhost etcd: etcd gateway
Jul  2 17:58:00 localhost etcd: run the stateless pass-through etcd TCP connection forwarding proxy
Jul  2 17:58:00 localhost etcd: etcd grpc-proxy
Jul  2 17:58:00 localhost etcd: run the stateless etcd v3 gRPC L7 reverse proxy
Jul  2 17:58:00 localhost systemd: etcd.service: main process exited, code=exited, status=2/INVALIDARGUMENT
Jul  2 17:58:00 localhost systemd: Failed to start Etcd Server.
Jul  2 17:58:00 localhost systemd: Unit etcd.service entered failed state.
Jul  2 17:58:00 localhost systemd: etcd.service failed.
Jul  2 17:58:00 localhost systemd: etcd.service holdoff time over, scheduling restart.
Jul  2 17:58:00 localhost systemd: Stopped Etcd Server.
Jul  2 17:58:00 localhost systemd: start request repeated too quickly for etcd.service
Jul  2 17:58:00 localhost systemd: Failed to start Etcd Server.
Jul  2 17:58:00 localhost systemd: Unit etcd.service entered failed state.
Jul  2 17:58:00 localhost systemd: etcd.service failed.

After correcting the misspelled variable in etcd.service to $ETCD_LISTEN_CLIENT_URLS and running systemctl daemon-reload, the service starts; it stays in "activating" until the other members join:
[root@master ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: activating (start) since Tue 2019-07-02 18:32:55 EDT; 16s ago
 Main PID: 8138 (etcd)
   Memory: 20.5M
   CGroup: /system.slice/etcd.service
           └─8138 /opt/kubernetes/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.130:2380 --listen-client-urls=https://192.168.238.13...

Jul 02 18:33:09 master etcd[8138]: a7e9807772a004c5 received MsgVoteResp from a7e9807772a004c5 at term 72
Jul 02 18:33:09 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 72
Jul 02 18:33:09 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to c858c42725f38881 at term 72
Jul 02 18:33:10 master etcd[8138]: health check for peer 203750a5948d27da could not connect: dial tcp 192.168.238.128:2380: i/o timeout
Jul 02 18:33:10 master etcd[8138]: health check for peer c858c42725f38881 could not connect: dial tcp 192.168.238.129:2380: i/o timeout
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 is starting a new election at term 72
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 became candidate at term 73
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 received MsgVoteResp from a7e9807772a004c5 at term 73
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 73
Jul 02 18:33:11 master etcd[8138]: a7e9807772a004c5 [logterm: 1, index: 3] sent MsgVote request to c858c42725f38881 at term 73
[root@master ~]# ps -ef|grep etcd
root       8138      1  0 18:32 ?        00:00:00 /opt/kubernetes/bin/etcd --name=etcd03 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.130:2380 --listen-client-urls=https://192.168.238.130:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.238.130:2379 --initial-advertise-peer-urls=https://192.168.238.130:2380 --initial-cluster=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-token=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root       8147   8085  0 18:34 pts/0    00:00:00 grep --color=auto etcd
The master node deployment is now done. (Note in the ps output above that --initial-cluster-token received the whole cluster string: the unit passes $ETCD_INITIAL_CLUSTER instead of $ETCD_INITIAL_CLUSTER_TOKEN. etcd still starts, but the unit should be corrected.)
Generate SSH keys for passwordless login between nodes:
[root@master ~]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
1b:b9:49:23:fc:32:64:6f:72:bd:77:d5:98:28:d4:a0 root@master
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|          .      |
|         . o     |
|     .  E.. .    |
|      = S.   . o.|
|     o = B. . o o|
|      + O ..   . |
|       *   .. .  |
|          .. .   |
+-----------------+
[root@master ~]# ls /root/.ssh/
id_rsa  id_rsa.pub
Distribute the key to each node:
[root@master ~]# ssh-copy-id root@192.168.238.129
The authenticity of host '192.168.238.129 (192.168.238.129)' can't be established.
ECDSA key fingerprint is d2:7e:40:ca:2b:fb:be:53:f3:2c:8c:e7:54:08:3d:d4.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.238.129's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.238.129'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]# ssh-copy-id root@192.168.238.128
The authenticity of host '192.168.238.128 (192.168.238.128)' can't be established.
ECDSA key fingerprint is d2:7e:40:ca:2b:fb:be:53:f3:2c:8c:e7:54:08:3d:d4.
Are you sure you want to continue connecting (yes/no)? yes   
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.238.128's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.238.128'"
and check to make sure that only the key(s) you wanted were added.
Test passwordless login:
[root@master ~]# ssh root@192.168.238.129
Last login: Tue Jul  2 17:23:09 2019 from 192.168.238.1
[root@node01 ~]# hostname
node01
Create the etcd install directory on node01:
[root@node01 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
Send the binary package from the master to node01:
[root@master ~]# scp -r /opt/kubernetes/bin/ root@192.168.238.129:/opt/kubernetes/
etcd                                                                                                                                                       100%   17MB  17.0MB/s   00:00    
etcdctl                                                                                                                                                    100%   15MB  14.5MB/s   00:01    
Check the files on node01:
[root@node01 ~]# ls /opt/kubernetes/bin/
etcd  etcdctl
Send the configuration files from the master to node01:
[root@master ~]# scp -r /opt/kubernetes/cfg/ root@192.168.238.129:/opt/kubernetes/
etcd  
[root@master ~]# scp -r /usr/lib/systemd/system/etcd.service root@192.168.238.129:/usr/lib/systemd/system
etcd.service
Check the files on node01:
[root@node01 ~]# ls /opt/kubernetes/cfg/
etcd
[root@node01 ~]# ll /usr/lib/systemd/system/etcd.service  
-rw-r--r-- 1 root root 996 Jul  2 20:55 /usr/lib/systemd/system/etcd.service
Send the certificates from the master to node01:
[root@master ~]# scp -r /opt/kubernetes/ssl/ root@192.168.238.129:/opt/kubernetes/
server-key.pem                                                                                                                                             100% 1675     1.6KB/s   00:00    
server.pem                                                                                                                                                 100% 1489     1.5KB/s   00:00    
ca-key.pem                                                                                                                                                 100% 1679     1.6KB/s   00:00    
ca.pem  
Check the files on node01:
[root@node01 ~]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem
Edit the configuration file:
[root@node01 ~]# cat /opt/kubernetes/cfg/etcd    
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.238.129:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.238.129:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.238.129:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.238.129:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
Start it:
[root@node01 ~]# systemctl start etcd
[root@node01 ~]# ps -ef|grep etcd
root       8702      1  0 21:01 ?        00:00:00 /opt/kubernetes/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.129:2380 --listen-client-urls=https://192.168.238.129:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.238.129:2379 --initial-advertise-peer-urls=https://192.168.238.129:2380 --initial-cluster=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-token=etcd01=https://192.168.238.129:2380,etcd02=https://192.168.238.128:2380,etcd03=https://192.168.238.130:2380 --initial-cluster-state=new --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --peer-cert-file=/opt/kubernetes/ssl/server.pem --peer-key-file=/opt/kubernetes/ssl/server-key.pem --trusted-ca-file=/opt/kubernetes/ssl/ca.pem --peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
root       8709   7875  0 21:02 pts/0    00:00:00 grep --color=auto etcd
[root@node01 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: activating (start) since Tue 2019-07-02 21:01:39 EDT; 54s ago
 Main PID: 8702 (etcd)
   Memory: 6.2M
   CGroup: /system.slice/etcd.service
           └─8702 /opt/kubernetes/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.238.129:2380 --listen-client-urls=https://192.168.238.12...

Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 is starting a new election at term 36
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 became candidate at term 37
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 received MsgVoteResp from c858c42725f38881 at term 37
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 37
Jul 02 21:02:32 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to a7e9807772a004c5 at term 37
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 is starting a new election at term 37
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 became candidate at term 38
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 received MsgVoteResp from c858c42725f38881 at term 38
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to 203750a5948d27da at term 38
Jul 02 21:02:33 node01 etcd[8702]: c858c42725f38881 [logterm: 1, index: 3] sent MsgVote request to a7e9807772a004c5 at term 38
Enable start at boot:
[root@node01 ~]# systemctl enable etcd
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Deploy node02 in the same way.

Check the cluster status

Set the PATH environment variable:
[root@master ~]# tail -n 1 /etc/profile
PATH=/opt/kubernetes/bin:$PATH
[root@master ~]# source /etc/profile
[root@master ~]# which etcd
/opt/kubernetes/bin/etcd
[root@master ~]# which etcdctl
/opt/kubernetes/bin/etcdctl
[root@master ~]# etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" cluster-health
cluster may be unhealthy: failed to list members
Error:  client: etcd cluster is unavailable or misconfigured; error #0: client: endpoint https://192.168.238.130:2379 exceeded header timeout
; error #1: client: endpoint https://192.168.238.128:2379 exceeded header timeout
; error #2: client: endpoint https://192.168.238.129:2379 exceeded header timeout

error #0: client: endpoint https://192.168.238.130:2379 exceeded header timeout
error #1: client: endpoint https://192.168.238.128:2379 exceeded header timeout
error #2: client: endpoint https://192.168.238.129:2379 exceeded header timeout
The failure here is most likely caused by the firewall or SELinux. After fixing that (for example, opening ports 2379-2380), the same check succeeds:

[root@master ~]# etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://192.168.238.130:2379,https://192.168.238.129:2379,https://192.168.238.128:2379" cluster-health
member 203750a5948d27da is healthy: got healthy result from https://192.168.238.128:2379
member a7e9807772a004c5 is healthy: got healthy result from https://192.168.238.130:2379
member c858c42725f38881 is healthy: got healthy result from https://192.168.238.129:2379
cluster is healthy
