k8s Single-Node and Multi-Node Deployment


k8s Single-Node Deployment

Reference documentation

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.13.md#downloads-for-v1131
https://kubernetes.io/docs/home/?path=users&persona=app-developer&level=foundational
https://github.com/etcd-io/etcd
https://shengbao.org/348.html
https://github.com/coreos/flannel
http://www.cnblogs.com/blogscc/p/10105134.html
https://blog.csdn.net/xiegh2014/article/details/84830880
https://blog.csdn.net/tiger435/article/details/85002337
https://www.cnblogs.com/wjoyxt/p/9968491.html
https://blog.csdn.net/zhaihaifei/article/details/79098564
http://blog.51cto.com/jerrymin/1898243
http://www.cnblogs.com/xuxinkun/p/5696031.html

1. Environment Plan

Software     Version
linux        centos7.4
kubernetes   1.13
docker       18
etcd         3.3

Role     IP            Components
master   172.16.1.43   kube-apiserver kube-controller-manager kube-scheduler etcd
node1    172.16.1.44   kubelet kube-proxy docker flannel etcd
node2    172.16.1.45   kubelet kube-proxy docker flannel etcd

2. Install Docker

Install Docker on both node machines:

yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum list docker-ce --showduplicates | sort -r
yum install docker-ce -y
systemctl start docker && systemctl enable docker

Add a China-local registry mirror:

vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://registry.docker-cn.com"]
}
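After changing daemon.json, Docker must be restarted for the mirror to take effect. A quick check sketch (the `docker info` output layout varies between Docker versions):

```shell
# Restart Docker so the daemon.json change is picked up
systemctl daemon-reload
systemctl restart docker
# Confirm the mirror is active (output format differs across Docker versions)
docker info | grep -A1 "Registry Mirrors"
```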

3. Self-Signed TLS Certificates

Component        Certificates used
etcd             ca.pem server.pem server-key.pem
kube-apiserver   ca.pem server.pem server-key.pem
kubelet          ca.pem ca-key.pem
kube-proxy       ca.pem kube-proxy.pem kube-proxy-key.pem
kubectl          ca.pem admin.pem admin-key.pem
flannel          ca.pem server.pem server-key.pem

1) Install the certificate generation tool cfssl

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo

2) Create the directory /opt/tools/cfssl; all certificates will be generated there

mkdir -p /opt/tools/cfssl

3.1 Create the CA certificate

1) CA config

cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

2) CA certificate signing request (CSR) file

cat > ca-csr.json <<EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}
EOF

3) Generate the CA certificate and private key (initialize the CA)

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Check the generated CA files:

[root@k8s-master cfssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem

3.2 Create the server certificate

1) Server CSR file

cat > server-csr.json <<EOF
{
    "CN": "kubernetes",
    "hosts": [
    "127.0.0.1",
    "172.16.1.43",
    "172.16.1.44",
    "172.16.1.45",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}
EOF

2) Generate the server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Check the generated server files:

[root@k8s-master cfssl]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  server.csr  server-csr.json  server-key.pem  server.pem
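It can be worth confirming that the SANs baked into server.pem match the planned host list before moving on; a sketch using openssl (assumed to be installed):

```shell
# List the Subject Alternative Names embedded in the server certificate;
# every IP and service DNS name from server-csr.json should appear here
openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"
```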

3.3 Create the kube-proxy certificate

1) kube-proxy CSR file

cat > kube-proxy-csr.json <<EOF
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}
EOF

2) Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

3.4 Create the client (admin) certificate

1) admin CSR file

cat > admin-csr.json <<EOF
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF

2) Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

4. Install etcd

Install etcd on all three machines for high availability.

1) Download etcd

wget https://github.com/etcd-io/etcd/releases/download/v3.3.12/etcd-v3.3.12-linux-amd64.tar.gz
tar xf etcd-v3.3.12-linux-amd64.tar.gz
cp etcd-v3.3.12-linux-amd64/etcd etcd-v3.3.12-linux-amd64/etcdctl /opt/kubernetes/bin/

2) Add the Kubernetes bin directory to the PATH for convenient use later.
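For example, by appending it to the login profile (this assumes a bash shell):

```shell
# Make the Kubernetes binaries available in every future shell
echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
source /etc/profile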

3) Edit the configuration file

vim /opt/kubernetes/cfg/etcd

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://172.16.1.43:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.1.43:2379"
 
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.43:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.43:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://172.16.1.43:2380,etcd02=https://172.16.1.44:2380,etcd03=https://172.16.1.45:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
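Only the etcd01 configuration is shown above; on the other two machines the member-specific fields change while ETCD_INITIAL_CLUSTER stays identical. An illustrative fragment for node1:

```shell
# /opt/kubernetes/cfg/etcd on node1 (172.16.1.44); ETCD_INITIAL_CLUSTER is unchanged
ETCD_NAME="etcd02"
ETCD_LISTEN_PEER_URLS="https://172.16.1.44:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.16.1.44:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.16.1.44:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://172.16.1.44:2379"
```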

4) Create the systemd unit

vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/kubernetes/cfg/etcd
# set GOMAXPROCS to number of processors
ExecStart=/opt/kubernetes/bin/etcd \
--name=$ETCD_NAME \
--data-dir=$ETCD_DATA_DIR \
--listen-client-urls=$ETCD_LISTEN_CLIENT_URLS,http://127.0.0.1:2379 \
--listen-peer-urls=$ETCD_LISTEN_PEER_URLS \
--advertise-client-urls=$ETCD_ADVERTISE_CLIENT_URLS \
--initial-cluster-token=$ETCD_INITIAL_CLUSTER_TOKEN \
--initial-cluster=$ETCD_INITIAL_CLUSTER \
--initial-cluster-state=new \
--cert-file=/opt/kubernetes/ssl/server.pem \
--key-file=/opt/kubernetes/ssl/server-key.pem \
--peer-cert-file=/opt/kubernetes/ssl/server.pem \
--peer-key-file=/opt/kubernetes/ssl/server-key.pem \
--trusted-ca-file=/opt/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/opt/kubernetes/ssl/ca.pem
#--peer-client-cert-auth
#--client-cert-auth
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5) Start etcd. When the first node starts, systemctl blocks because it cannot reach the other members yet; press Ctrl+C to get the shell back (the service is already running), then start etcd on the remaining nodes.

systemctl start etcd
systemctl enable etcd

6) Check the etcd cluster health

/opt/kubernetes/bin/etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379" cluster-health

member 2389474cc6fd9d08 is healthy: got healthy result from https://172.16.1.45:2379
member 5662fbe4b66bbe16 is healthy: got healthy result from https://172.16.1.44:2379
member 9f7ff9ac177a0ffb is healthy: got healthy result from https://172.16.1.43:2379
cluster is healthy

5. Deploy the flannel Network

Deploy flannel on both node machines.

5.1 Register the Pod network segment in etcd

1) The Pod network written here ($CLUSTER_CIDR) must be a /16 and must match the --cluster-cidr flag of kube-controller-manager.

etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379" set /coreos.com/network/config '{"Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'

2) Verify that the key was written

etcdctl --ca-file=/opt/kubernetes/ssl/ca.pem --cert-file=/opt/kubernetes/ssl/server.pem --key-file=/opt/kubernetes/ssl/server-key.pem --endpoints="https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379" get /coreos.com/network/config

5.2 Install flannel

1) Download, extract, and install

wget https://github.com/coreos/flannel/releases/download/v0.11.0/flannel-v0.11.0-linux-amd64.tar.gz
tar xf flannel-v0.11.0-linux-amd64.tar.gz
mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

2) Edit the flannel configuration file

vim /opt/kubernetes/cfg/flanneld

FLANNEL_OPTIONS="--etcd-endpoints=https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379 -etcd-cafile=/opt/kubernetes/ssl/ca.pem -etcd-certfile=/opt/kubernetes/ssl/server.pem -etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

3) Create the flanneld systemd unit

vim /usr/lib/systemd/system/flanneld.service

[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
 
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

4) Modify the Docker systemd unit

To make Docker start on the flannel subnet, set EnvironmentFile=/run/flannel/subnet.env and ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS:

vim /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
 
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
 
[Install]
WantedBy=multi-user.target

5) Start the services

Note: stop Docker (and kubelet, if it is running) before starting flannel, so that flannel can take over the docker0 bridge.

systemctl daemon-reload
systemctl stop docker
systemctl start flanneld
systemctl enable flanneld
systemctl start docker

6) Verify the services

cat /run/flannel/subnet.env
ip a

6. Create the Node kubeconfig Files

Work in the directory where the certificates were generated:

cd /opt/tools/cfssl

6.1 Create the TLS bootstrapping token

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv <<EOF
$BOOTSTRAP_TOKEN,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
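The kube-apiserver configuration in section 7.1 reads this file via --token-auth-file=/opt/kubernetes/cfg/token.csv, so copy it into place on the master:

```shell
# The apiserver expects the bootstrap token file at this path (see section 7.1)
cp token.csv /opt/kubernetes/cfg/token.csv
```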

6.2 Create the kubelet bootstrap kubeconfig

KUBE_APISERVER="https://172.16.1.43:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=$KUBE_APISERVER \
  --kubeconfig=bootstrap.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=$BOOTSTRAP_TOKEN \
  --kubeconfig=bootstrap.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

6.3 Create the kube-proxy kubeconfig

The kubectl binary ships in the kubernetes-node package.

kubectl config set-cluster kubernetes \
  --certificate-authority=ca.pem \
  --embed-certs=true \
  --server=$KUBE_APISERVER \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

This produces two files, bootstrap.kubeconfig and kube-proxy.kubeconfig, which are needed when deploying the nodes.
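A sketch of distributing them to the nodes from section 1 (this assumes root SSH access and that /opt/kubernetes/cfg already exists on each node):

```shell
# Push the bootstrap and kube-proxy kubeconfigs to every node
for node in 172.16.1.44 172.16.1.45; do
  scp bootstrap.kubeconfig kube-proxy.kubeconfig root@"$node":/opt/kubernetes/cfg/
done
```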

7. Deploy the Master

Perform these steps on the master node.

Download the server package and put the binaries in place:

https://github.com/kubernetes/kubernetes/releases

tar xf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
mv kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/

7.1 Install kube-apiserver

1) Create the apiserver configuration file

vim /opt/kubernetes/cfg/kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379 \
--insecure-bind-address=127.0.0.1 \
--bind-address=172.16.1.43 \
--insecure-port=8080 \
--secure-port=6443 \
--advertise-address=172.16.1.43 \
--allow-privileged=true \
--service-cluster-ip-range=10.10.10.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \
--etcd-certfile=/opt/kubernetes/ssl/server.pem \
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

2) Create the systemd unit

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

3) Start kube-apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver

[root@k8s-master cfg]# ss -tnlp|grep kube-apiserver
LISTEN     0      16384  172.16.1.43:6443                     *:*                   users:(("kube-apiserver",pid=5487,fd=5))
LISTEN     0      16384  127.0.0.1:8080                     *:*                   users:(("kube-apiserver",pid=5487,fd=3))

7.2 Install kube-scheduler

1) Create the configuration file

vim /opt/kubernetes/cfg/kube-scheduler

KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"

2) Create the systemd unit

vim /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

3) Start kube-scheduler

systemctl daemon-reload
systemctl enable kube-scheduler.service 
systemctl start kube-scheduler.service

7.3 Install kube-controller-manager

1) Create the configuration file

vim /opt/kubernetes/cfg/kube-controller-manager

KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.10.10.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

2) Create the systemd unit

vim /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

3) Start the service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

Installation is complete; check the status of the master components:

[root@k8s-master cfg]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/controller-manager   Healthy   ok                  
componentstatus/scheduler            Healthy   ok                  
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}

8. Deploy the Nodes

Perform these steps on every node.

Download the node package:

tar zxvf kubernetes-node-linux-amd64.tar.gz
cd kubernetes/node/bin/
cp kube-proxy kubelet /opt/kubernetes/bin/

Copy bootstrap.kubeconfig and kube-proxy.kubeconfig from the master to every node.

Copy the relevant certificates from the master to every node.
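For example, from the certificate directory on the master (this assumes root SSH access and an existing /opt/kubernetes/ssl on each node):

```shell
# The nodes need the CA, server, and kube-proxy certificates (see the table in section 3)
cd /opt/tools/cfssl
for node in 172.16.1.44 172.16.1.45; do
  scp ca.pem server.pem server-key.pem kube-proxy.pem kube-proxy-key.pem root@"$node":/opt/kubernetes/ssl/
done
```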

8.1 Install kubelet

1) Create the kubelet configuration file

vim /opt/kubernetes/cfg/kubelet

KUBELET_OPTS="--logtostderr=true \
--v=4 \
--address=172.16.1.44 \
--hostname-override=172.16.1.44 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--allow-privileged=true \
--cluster-dns=10.10.10.2 \
--cluster-domain=cluster.local \
--fail-swap-on=false \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

2) Create the systemd unit

vim /usr/lib/systemd/system/kubelet.service

[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
 
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
 
[Install]
WantedBy=multi-user.target

3) Bind the kubelet-bootstrap user to the system:node-bootstrapper cluster role; otherwise kubelet fails at startup with an error that the kubelet-bootstrap user has no permission to create certificates.

Run this on the master node (kubectl connects to localhost:8080 by default):

kubectl create clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap

4) Start kubelet

systemctl enable kubelet
systemctl start kubelet

5) The master must approve the kubelet CSR. CSRs can be approved manually or automatically; the automatic way is recommended because, starting with v1.8, the certificates generated from approved CSRs can be rotated automatically. The manual procedure follows. List the CSRs:

[root@k8s-master cfssl]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-biBRRPvmJcLrXmUh1WNlStlEzc_BctF8fymNjOl4Wms   2m   kubelet-bootstrap   Pending

Approve the node's CSR:

kubectl certificate approve node-csr-biBRRPvmJcLrXmUh1WNlStlEzc_BctF8fymNjOl4Wms

Check the CSR again:

[root@k8s-master cfssl]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-biBRRPvmJcLrXmUh1WNlStlEzc_BctF8fymNjOl4Wms   2m   kubelet-bootstrap   Approved,Issued

Check the cluster's node status:

[root@k8s-master cfssl]# kubectl get node
NAME          STATUS   ROLES    AGE    VERSION
172.16.1.44   Ready    <none>   138m   v1.13.0
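When several nodes register at once, each pending CSR can be approved in one pass; a convenience sketch (review the names first in a real environment):

```shell
# Approve every CSR currently known to the apiserver
kubectl get csr -o name | xargs -r -n1 kubectl certificate approve
```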

8.2 Install kube-proxy

1) Create the kube-proxy configuration file

vim /opt/kubernetes/cfg/kube-proxy

KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=172.16.1.44 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

2) Create the systemd unit

vim /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target
 
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
 
[Install]
WantedBy=multi-user.target

3) Start kube-proxy

systemctl enable kube-proxy
systemctl start kube-proxy

After the node joins the cluster, the kubelet kubeconfig and client certificates are generated automatically under the cfg and ssl directories:

[root@k8s-node1 cfg]# ls /opt/kubernetes/cfg/kubelet.kubeconfig 
/opt/kubernetes/cfg/kubelet.kubeconfig

[root@k8s-node1 cfg]# ls /opt/kubernetes/ssl/kubelet*
/opt/kubernetes/ssl/kubelet-client-2019-03-30-11-49-33.pem  /opt/kubernetes/ssl/kubelet.crt
/opt/kubernetes/ssl/kubelet-client-current.pem              /opt/kubernetes/ssl/kubelet.key

9. The kubectl Management Tool

Configure kubectl on a client machine to manage the cluster.

1) Copy the kubectl binary to the client

scp kubectl 172.16.1.44:/usr/bin/

2) Copy the admin and CA certificates created earlier to the client

scp admin*pem ca.pem 172.16.1.44:/root/kubernetes/

3) Set the apiserver address and root CA in a cluster entry named kubernetes

kubectl config set-cluster kubernetes --server=https://172.16.1.43:6443 --certificate-authority=kubernetes/ca.pem

This generates the config file /root/.kube/config:

cat .kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []

4) Set the certificate credentials for a user entry named cluster-admin

kubectl config set-credentials cluster-admin --certificate-authority=kubernetes/ca.pem --client-key=kubernetes/admin-key.pem --client-certificate=kubernetes/admin.pem

This adds the admin user's information to /root/.kube/config:

cat .kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate: /root/kubernetes/admin.pem
    client-key: /root/kubernetes/admin-key.pem

5) Create a context named default that ties the cluster and the user together

kubectl config set-context default --cluster=kubernetes --user=cluster-admin

This adds the default context to /root/.kube/config:

cat .kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: cluster-admin
  name: default
current-context: ""
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate: /root/kubernetes/admin.pem
    client-key: /root/kubernetes/admin-key.pem

6) Switch to the default context

kubectl config use-context default

The complete client configuration file:

.kube/config 
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /root/kubernetes/ca.pem
    server: https://172.16.1.43:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: cluster-admin
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: cluster-admin
  user:
    client-certificate: /root/kubernetes/admin.pem
    client-key: /root/kubernetes/admin-key.pem

7) Test: use kubectl from the client to reach the cluster

[root@k8s-node1 ~]# kubectl get node
NAME          STATUS   ROLES    AGE     VERSION
172.16.1.44   Ready    <none>   6h30m   v1.13.0
172.16.1.45   Ready    <none>   6h10m   v1.13.0

The client's certificates and config file can be packaged and reused on any other client machine.
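For example (paths as used above; the archive name is chosen here for illustration):

```shell
# Bundle the kubectl config and the admin/CA certificates for reuse elsewhere
tar czf k8s-client.tar.gz -C /root .kube/config kubernetes
```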

10. Install CoreDNS

When installing kubelet we pointed --cluster-dns at 10.10.10.2, but no DNS component is running yet, so newly created pods cannot resolve service names. Install the DNS add-on to fix this.

Starting with Kubernetes 1.13, CoreDNS replaces kube-dns as the default DNS add-on.

1) Generate the coredns.yaml file

The coredns.yaml template file:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base

# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes __PILLAR__DNS__DOMAIN__ in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: docker/default
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: k8s.gcr.io/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: __PILLAR__DNS__SERVER__
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

The transforms2sed.sed file:

https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/transforms2sed.sed

s/__PILLAR__DNS__SERVER__/$DNS_SERVER_IP/g
s/__PILLAR__DNS__DOMAIN__/$DNS_DOMAIN/g
s/__PILLAR__CLUSTER_CIDR__/$SERVICE_CLUSTER_IP_RANGE/g
s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g

Generate the final manifest:

sed -f transforms2sed.sed coredns.yaml.base > coredns.yaml
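Note that sed does not expand the shell-style variables ($DNS_SERVER_IP and friends) that appear inside transforms2sed.sed; they must be given concrete values first. An equivalent self-contained sketch, with values matching this cluster:

```shell
# Values must match --service-cluster-ip-range and the kubelet's --cluster-domain
DNS_SERVER_IP=10.10.10.2
DNS_DOMAIN=cluster.local
sed -e "s/__PILLAR__DNS__SERVER__/${DNS_SERVER_IP}/g" \
    -e "s/__PILLAR__DNS__DOMAIN__/${DNS_DOMAIN}/g" \
    coredns.yaml.base > coredns.yaml
```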

The finished coredns.yaml:

# Warning: This is a file generated from the base underscore template file: coredns.yaml.base

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            endpoint http://172.16.1.43:8080
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: docker/default
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.3.1
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.10.10.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

2) Deploy CoreDNS

kubectl create -f coredns.yaml

3) Check the status

kubectl get all -o wide -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE   IP            NODE          NOMINATED NODE   READINESS GATES
pod/coredns-69b995478c-vs46g               1/1     Running   0          50m   172.17.16.2   172.16.1.44   <none>           <none>
pod/kubernetes-dashboard-9bb654ff4-4zmn8   1/1     Running   0          12h   172.17.66.5   172.16.1.45   <none>           <none>

NAME                           TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
service/kube-dns               ClusterIP   10.10.10.2     <none>        53/UDP,53/TCP,9153/TCP   50m     k8s-app=kube-dns
service/kubernetes-dashboard   NodePort    10.10.10.191   <none>        81:45236/TCP             4d17h   app=kubernetes-dashboard

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE     CONTAINERS             IMAGES                                                                                   SELECTOR
deployment.apps/coredns                1/1     1            1           50m     coredns                coredns/coredns:1.3.1                                                                    k8s-app=kube-dns
deployment.apps/kubernetes-dashboard   1/1     1            1           4d17h   kubernetes-dashboard   registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1   app=kubernetes-dashboard

NAME                                             DESIRED   CURRENT   READY   AGE     CONTAINERS             IMAGES                                                                                   SELECTOR
replicaset.apps/coredns-69b995478c               1         1         1       50m     coredns                coredns/coredns:1.3.1                                                                    k8s-app=kube-dns,pod-template-hash=69b995478c
replicaset.apps/kubernetes-dashboard-9bb654ff4   1         1         1       4d17h   kubernetes-dashboard   registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1   app=kubernetes-dashboard,pod-template-hash=9bb654ff4

4) Test

Test DNS resolution with an Alpine-based image that ships dig/nslookup

kubectl run dig --rm -it --image=docker.io/azukiapp/dig /bin/sh
----------

/ # cat /etc/resolv.conf 
nameserver 10.10.10.2
search default.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
/ # dig kubernetes.default.svc.cluster.local

; <<>> DiG 9.10.3-P3 <<>> kubernetes.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13605
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;kubernetes.default.svc.cluster.local. IN A

;; ANSWER SECTION:
kubernetes.default.svc.cluster.local. 5	IN A	10.10.10.1

;; Query time: 1 msec
;; SERVER: 10.10.10.2#53(10.10.10.2)
;; WHEN: Thu Apr 04 02:43:18 UTC 2019
;; MSG SIZE  rcvd: 117

/ # dig nginx-service.default.svc.cluster.local

; <<>> DiG 9.10.3-P3 <<>> nginx-service.default.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 24013
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;nginx-service.default.svc.cluster.local. IN A

;; ANSWER SECTION:
nginx-service.default.svc.cluster.local. 5 IN A	10.10.10.176

;; Query time: 0 msec
;; SERVER: 10.10.10.2#53(10.10.10.2)
;; WHEN: Thu Apr 04 02:43:29 UTC 2019
;; MSG SIZE  rcvd: 123

/ # dig www.baidu.com

; <<>> DiG 9.10.3-P3 <<>> www.baidu.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 28619
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 5, ADDITIONAL: 2

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;www.baidu.com.			IN	A

;; ANSWER SECTION:
www.baidu.com.		30	IN	CNAME	www.a.shifen.com.
www.a.shifen.com.	30	IN	A	119.75.217.26
www.a.shifen.com.	30	IN	A	119.75.217.109

;; AUTHORITY SECTION:
a.shifen.com.		30	IN	NS	ns4.a.shifen.com.
a.shifen.com.		30	IN	NS	ns5.a.shifen.com.
a.shifen.com.		30	IN	NS	ns3.a.shifen.com.
a.shifen.com.		30	IN	NS	ns2.a.shifen.com.
a.shifen.com.		30	IN	NS	ns1.a.shifen.com.

;; ADDITIONAL SECTION:
ns4.a.shifen.com.	30	IN	A	14.215.177.229

;; Query time: 3 msec
;; SERVER: 10.10.10.2#53(10.10.10.2)
;; WHEN: Thu Apr 04 02:43:41 UTC 2019
;; MSG SIZE  rcvd: 391

11. Master node high availability

11.1 Add the master2 node

1) On master1, add the master2 node's IP 172.16.1.46 to the hosts list of the server certificate request (server-csr.json), then regenerate the server.pem certificate

cd /opt/tools/cfssl

vim server-csr.json

{
    "CN": "kubernetes",
    "hosts": [
    "127.0.0.1",
    "10.10.10.1",
    "172.16.1.43",
    "172.16.1.44",
    "172.16.1.45",
    "172.16.1.46",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "K8s",
            "OU": "System"
        }
    ]
}


2) Regenerate the server certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
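
Before distributing the regenerated certificate, it is worth confirming that the new IP actually landed in the SAN list. A self-contained sketch of the check, using a throwaway self-signed certificate rather than the cluster's real server.pem, and assuming OpenSSL 1.1.1+ for `-addext`:

```shell
# Generate a throwaway cert carrying the same kind of IP/DNS SANs.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -days 1 -subj "/CN=kubernetes" \
  -addext "subjectAltName=IP:172.16.1.46,DNS:kubernetes.default.svc.cluster.local"
# Print the SAN extension; the newly added IP must appear here.
openssl x509 -in "$tmp/cert.pem" -noout -ext subjectAltName
```

On the real cluster, run the same `openssl x509 -noout -ext subjectAltName` inspection against the freshly generated server.pem (or use `cfssl-certinfo -cert server.pem`, installed earlier) and check that 172.16.1.46 is listed before copying anything out.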

3) Copy server.pem and server-key.pem to the ssl directory on the master and node machines, then restart the related services

4) Copy the /opt/kubernetes directory and the apiserver, scheduler, and controller-manager unit files from master1 to master2

scp -r /opt/kubernetes 172.16.1.46:/opt/
scp /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service 172.16.1.46:/usr/lib/systemd/system/
scp /etc/profile.d/kubernetes.sh 172.16.1.46:/etc/profile.d/
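
Copying all three unit files in a single scp depends on shell brace expansion, which bash performs before scp ever sees the arguments; without the braces, the comma-separated string is treated as one (nonexistent) filename. A local sketch of what the shell expands, assuming bash:

```shell
# Brace expansion turns one pattern into three separate arguments:
set -- /usr/lib/systemd/system/{kube-apiserver,kube-scheduler,kube-controller-manager}.service
echo "$#"   # 3 under bash
echo "$1"   # /usr/lib/systemd/system/kube-apiserver.service
```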

5) On master2, edit the kube-apiserver configuration file and change the bind and advertise addresses to master2's IP

cat kube-apiserver 
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=https://172.16.1.43:2379,https://172.16.1.44:2379,https://172.16.1.45:2379 \\
--insecure-bind-address=0.0.0.0 \\
--bind-address=172.16.1.46 \\
--insecure-port=8080 \\
--secure-port=6443 \\
--advertise-address=172.16.1.46 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.10.10.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/kubernetes/ssl/ca.pem \\
--etcd-certfile=/opt/kubernetes/ssl/server.pem \\
--etcd-keyfile=/opt/kubernetes/ssl/server-key.pem"

6) Start the services on master2

systemctl start kube-apiserver
systemctl start kube-scheduler
systemctl start kube-controller-manager

Check the logs for errors.

7) Test on master2

kubectl get node 
NAME          STATUS   ROLES    AGE   VERSION
172.16.1.44   Ready    <none>   10d   v1.13.0
172.16.1.45   Ready    <none>   10d   v1.13.0

11.2 Install an nginx proxy on the node machines

Install nginx on every node, listening on local port 6443 and load-balancing to the backend apiservers' port 6443, then point the nodes' kubelet and kube-proxy at apiserver address 127.0.0.1:6443. This makes the master layer highly available from the nodes' point of view.

1) Install nginx

yum install nginx -y

2) Edit the nginx configuration file

vim nginx.conf

user nginx nginx;
worker_processes 8;
 
pid /usr/local/nginx/logs/nginx.pid;

worker_rlimit_nofile 51200;
events {
    use epoll;
    worker_connections 65535;
}

stream {
    upstream k8s-apiserver {
        server 172.16.1.43:6443;
        server 172.16.1.46:6443;
    }
    server {
        listen 127.0.0.1:6443;
        proxy_pass k8s-apiserver;
    }
}


3) Start nginx

/etc/init.d/nginx start
chkconfig nginx on

4) Update the apiserver address in every node-side component configuration

cd /opt/kubernetes/cfg

ls *config | xargs -i sed -i 's/172.16.1.43/127.0.0.1/' {}
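
A self-contained dry run of that rewrite on scratch copies (hypothetical file contents, not the real node configs), showing that every `*config` file gets the master IP swapped for the local nginx listener:

```shell
# Rehearse the apiserver-address rewrite on throwaway config files.
tmpdir=$(mktemp -d)
cd "$tmpdir"
printf 'KUBELET_OPTS="--hostname-override=172.16.1.44"\nserver: https://172.16.1.43:6443\n' > kubelet.config
printf 'server: https://172.16.1.43:6443\n' > kube-proxy.config
# Same pattern as on the nodes: GNU xargs -i substitutes each filename for {}.
ls *config | xargs -i sed -i 's/172.16.1.43/127.0.0.1/' {}
grep 'server:' kubelet.config kube-proxy.config
```

Only the old master address 172.16.1.43 is rewritten; unrelated addresses such as the node's own IP are left alone.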

5) Restart kubelet and kube-proxy

systemctl restart kubelet
systemctl restart kube-proxy

Check the logs for errors.

6) On a master node, check the node status

kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
172.16.1.44   Ready    <none>   10d   v1.13.0
172.16.1.45   Ready    <none>   10d   v1.13.0

Everything looks normal; master high availability is now complete.
