Cloud Native: Binary Deployment of a Single-Master Kubernetes Cluster

Posted by 键客李大白


1. Deployment Notes

1.1 Host Inventory

IP Address      Hostname         Role     Components
192.168.2.10    lidabai-master   master   etcd, kube-apiserver, kube-scheduler, kube-controller-manager, kubectl, kubelet, kube-proxy
192.168.2.11    lidabai-node1    node     kubelet, kube-proxy

1.2 Component Versions

etcd 3.4.16

kubernetes 1.20.15 (the last release of the 1.20 series)

docker-ce 19.03.8

1.3 Network Segments

Service IP range: 10.96.0.0/12

Pod IP range: 10.244.0.0/16

clusterCIDR: 10.244.0.0/16

10.96.0.1: the first IP of the Service range;

10.96.0.10: the CoreDNS service IP;

10.244.0.1: the first IP of the Pod range;

--service-cluster-ip-range=10.96.0.0/12
--cluster-cidr=10.244.0.0/16

2. Host Initialization

Disable the swap partition

$ sed -ri '/^[^#]*swap/s/^/#/' /etc/fstab  &&  swapoff  -a

Create the working directories

$ mkdir -p /etc/kubernetes/pki/etcd           #kubernetes component certificate files
$ mkdir    /etc/kubernetes/conf           #configuration files
$ mkdir    /var/log/kubernetes/          #log file path
$ mkdir    /etc/kubernetes/plugins     #manifests for cluster add-ons (calico, coredns, metrics-server, etc.)

Configure the hosts file

Run on every host.

Modify each machine's /etc/hosts file:

$ cat >> /etc/hosts << EOF
192.168.2.10 lidabai-master
192.168.2.11 lidabai-node1
EOF

Configure time synchronization

Server side

$ yum install chrony -y
$ vim /etc/chrony.conf
server 127.127.1.0 iburst         #sync against the local clock; comment out or delete the other server lines
allow 192.168.2.0/24   #hosts, subnets, or networks allowed to use this machine as an NTP server
local stratum 10    #keep serving time without syncing to anyone else; the stratum level of this server
$ systemctl restart chronyd  && systemctl enable chronyd && systemctl status  chronyd

Client side

# yum install chrony -y
# vim /etc/chrony.conf
server  192.168.2.10 iburst
# systemctl restart chronyd   #restart after the server side is ready
# systemctl enable chronyd
# chronyc sources              #check sync status; a leading ^* means healthy
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 192.168.2.10                 4   6   377    48  +1118us[+4139us] +/-   18ms
# date                             #run date on all hosts at the same time to confirm the clocks match

Extra: sync against an Internet time source

$ yum install -y ntpdate
$ ntpdate time2.aliyun.com

Configure limits parameters

$ cat <<EOF >> /etc/security/limits.conf
# typical values for a Kubernetes node; adjust to your environment
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
EOF

Upgrade the kernel (rpm)

CentOS needs a 4.18+ kernel; here we upgrade to 4.19.12. Run on all hosts.

(1) Check the current kernel version

$ uname -r        #show the current kernel version

(2) Install the boot loader

$ grub2-install /dev/sda   #install the boot loader (optional)

(3) Install the kernel

$ wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
$ yum install -y kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm

(4) Set the default boot entry

#change GRUB_DEFAULT=saved to 0
$ sed -i '/GRUB_DEFAULT/s/saved/0/' /etc/default/grub  &&  grep GRUB_DEFAULT   /etc/default/grub

(5) Regenerate the GRUB configuration

Run this after every kernel upgrade.

$ grub2-mkconfig -o /boot/grub2/grub.cfg

(6) List the boot entries

$ awk -F\\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (4.19.12-1.el7.elrepo.x86_64) 7 (Core) 
1 : CentOS Linux (3.10.0-862.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-26c887a0b76f40d4be5e70af41f8af2f) 7 (Core)

(7) Reboot the host

$ reboot
$ uname -r   #confirm the kernel upgrade succeeded

Tune kernel parameters

Run on every host.

Do this after the kernel upgrade is complete!

$ modprobe br_netfilter    #the net.bridge.* keys below generally require this module to be loaded first
$ cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl -p

net.bridge.bridge-nf-call-iptables: pass bridged IPv4 traffic through iptables

net.ipv4.ip_forward: enable IP forwarding (required)

net.bridge.bridge-nf-call-ip6tables: pass bridged IPv6 traffic through ip6tables

Load the ipvs modules

If the kube-proxy component runs in IPVS mode, the ipvs kernel modules must be loaded on each host!

$ yum install  -y ipvsadm ipset sysstat conntrack libseccomp  
$ cat <<EOF > /etc/sysconfig/modules/ipvs.modules 
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack ip_tables ip_set xt_set ipt_set ipt_rpfilter ipt_REJECT ipip "
for kernel_module in \\$ipvs_modules; do
  /sbin/modinfo -F filename \\$kernel_module > /dev/null 2>&1
  if [ \\$? -eq 0 ]; then
    /sbin/modprobe \\$kernel_module
  fi
done
EOF  
$ chmod 755 /etc/sysconfig/modules/ipvs.modules 
$ sh /etc/sysconfig/modules/ipvs.modules 
$ lsmod | grep ip_vs

Install docker-ce

$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum clean all && yum makecache
$ yum install -y docker-ce
$ systemctl enable docker --now
$ docker --version

Configure the Docker registry mirrors and cgroup driver

$ cat > /etc/docker/daemon.json << EOF
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": [
        "http://hub-mirror.c.163.com",
        "https://registry.docker-cn.com"
    ],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "500m",
        "max-file": "2"
    }
}
EOF
$ systemctl restart docker

Pull dependency images

The pause ("root container") image and other dependencies need to be pulled in advance.

$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2
$ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2

Install the cfssl tools

The cfssl tools generate the certificate files the cluster needs; installing them on one master node is enough!

$ wget https://github.com/cloudflare/cfssl/releases/download/v1.6.0/cfssl_1.6.0_linux_amd64  -O   /usr/local/bin/cfssl
$ wget https://github.com/cloudflare/cfssl/releases/download/v1.6.0/cfssljson_1.6.0_linux_amd64 -O  /usr/local/bin/cfssljson
$ wget https://github.com/cloudflare/cfssl/releases/download/v1.6.0/cfssl-certinfo_1.6.0_linux_amd64   -O  /usr/local/bin/cfssl-certinfo 
$ chmod +x  /usr/local/bin/cfssl*

cfssljson: takes the JSON output of cfssl and multirootca and writes it out as certificate files (certificate, key, CSR, and bundle);

cfssl-certinfo: displays the details of a CSR or certificate file; can be used for certificate verification.

Download the Kubernetes binary package

Download the Kubernetes binary package, unpack it, and move the binaries into /usr/local/bin/!

$ wget https://dl.k8s.io/v1.20.15/kubernetes-server-linux-amd64.tar.gz
$ tar -zxvf kubernetes-server-linux-amd64.tar.gz
$ mv  kubernetes/server/bin/{kube-apiserver,kube-controller-manager,kube-scheduler,kubectl,kubelet,kube-proxy}  /usr/local/bin/
$ ls /usr/local/bin/
kube-apiserver  kube-controller-manager  kubectl  kubelet  kube-proxy  kube-scheduler

3. Create the CA

Build a self-hosted certificate authority to issue certificates to the Kubernetes components!

3.1 Create the signing policy file

$ cfssl print-defaults  config > ca-config.json            #generate a default config file
$ cat <<EOF > ca-config.json
{
    "signing": {
        "default": {
            "expiry": "87600h"
        },
        "profiles": {
            "kubernetes": {
                "expiry": "87600h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth",
                    "client auth"
                ]
            }
        }
    }
}
EOF

default.expiry: the default certificate validity (in hours)

profiles.kubernetes: the profile used when issuing certificates to services;

signing: the certificate can be used to sign other certificates; the generated ca.pem has CA=TRUE;

key encipherment: key encryption;

profiles: defines configurations for different roles; several profiles may be defined with different expiry times and usage scenarios, and a specific profile is selected later when signing a certificate.

server auth: server authentication; clients can use this CA to verify certificates presented by servers;

client auth: client authentication; servers can use this CA to verify certificates presented by clients;

3.2 Generate and edit the CSR file

This is like an application form, filled with the applicant's information (a certificate signing request).

$ cfssl  print-defaults csr  > ca-csr.json
$ cat <<EOF > ca-csr.json
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "system"
        }
    ]
}
EOF

hosts: the scope of authorization; a node or service outside this list that uses the certificate will get a certificate-mismatch error, and connections may fail if a name is missing;

key: the encryption algorithm, usually RSA asymmetric encryption (algo: rsa; size: 2048)

CN: Common Name; kube-apiserver extracts this field from the certificate as the requesting User Name, and browsers use it to validate a site's legitimacy; for a website certificate, the CN is the domain being served.

C: country (CN = China)

ST: state or province (e.g. Hunan)

L: locality or city (e.g. Beijing)

O: Organization; kube-apiserver extracts this field from the certificate as the requesting user's Group;

3.3 Generate the CA certificate

Generate the CA certificate into /etc/kubernetes/pki/

$ cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca
2022/07/17 18:54:18 [INFO] generating a new CA key and certificate from CSR
2022/07/17 18:54:18 [INFO] generate received request
2022/07/17 18:54:18 [INFO] received CSR
2022/07/17 18:54:18 [INFO] generating key: rsa-2048
2022/07/17 18:54:18 [INFO] encoded CSR
2022/07/17 18:54:18 [INFO] signed certificate with serial number 295353362230393697370845787617732442107792012186
$ ls  /etc/kubernetes/pki/
ca.csr  ca-key.pem  ca.pem
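
With the cfssl-certinfo tool installed earlier, you can double-check what was just generated; for example, printing the CA certificate's subject, issuer, and validity period:

$ cfssl-certinfo -cert /etc/kubernetes/pki/ca.pem    #print subject, issuer, validity, etc.

The subject common_name should be kubernetes, matching the CN in ca-csr.json.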

4. Deploy etcd

4.1 Issue the etcd certificate

1) Create the etcd CSR file

$ cfssl  print-defaults csr  > etcd-csr.json
$ cat <<EOF > etcd-csr.json
{
    "CN": "etcd",
    "hosts": [
        "127.0.0.1",
        "192.168.2.10"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "system"
        }
    ]
}
EOF

2) Generate the etcd certificate

Generate the etcd certificate into /etc/kubernetes/pki/etcd

$ cfssl  gencert  -ca=/etc/kubernetes/pki/ca.pem  \\
 -ca-key=/etc/kubernetes/pki/ca-key.pem \\
 -config=ca-config.json  -profile=kubernetes  \\
 etcd-csr.json  |  cfssljson -bare  /etc/kubernetes/pki/etcd/etcd

2021/11/09 17:48:29 [INFO] generate received request
2021/11/09 17:48:29 [INFO] received CSR
2021/11/09 17:48:29 [INFO] generating key: rsa-2048
2021/11/09 17:48:29 [INFO] encoded CSR
2021/11/09 17:48:29 [INFO] signed certificate with serial number 489355794923657854134345592908215568442583798531
$ ls /etc/kubernetes/pki/etcd/
etcd.csr  etcd-key.pem  etcd.pem

-ca-key: the CA's private key;

-config: the CA signing policy;

-profile: which profile within the signing policy to use;

etcd.pem: the public certificate

etcd-key.pem: the private key

4.2 Download the etcd binaries

$ wget https://github.com/etcd-io/etcd/releases/download/v3.4.16/etcd-v3.4.16-linux-amd64.tar.gz
$ tar -xf etcd-v3.4.16-linux-amd64.tar.gz 
$ cp -p  etcd-v3.4.16-linux-amd64/{etcd,etcdctl}  /usr/local/bin/
$ ls /usr/local/bin/etcd*
/usr/local/bin/etcd  /usr/local/bin/etcdctl

4.3 Create the etcd config file

$ cat  /etc/kubernetes/conf/etcd.conf
#[Cluster tag]
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.2.10:2380"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.2.10:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.2.10:2379"
#[Member tag]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_HEARTBEAT_INTERVAL="100"
ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://192.168.2.10:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.2.10:2379,http://127.0.0.1:2379"
#[Safety mark]
ETCD_CLIENT_CERT_AUTH=true
ETCD_PEER_CERT_FILE="/etc/kubernetes/pki/etcd/etcd.pem"
ETCD_PEER_KEY_FILE="/etc/kubernetes/pki/etcd/etcd-key.pem"
ETCD_PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_TRUSTED_CA_FILE="/etc/kubernetes/pki/ca.pem"
ETCD_PEER_AUTO_TLS="true"
ETCD_CERT_FILE="/etc/kubernetes/pki/etcd/etcd.pem"
ETCD_KEY_FILE="/etc/kubernetes/pki/etcd/etcd-key.pem"
ETCD_TRUSTED_CA_FILE="/etc/kubernetes/pki/ca.pem"
ETCD_AUTO_TLS="true"

4.4 Create the systemd unit

$ cat <<EOF > /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/conf/etcd.conf
WorkingDirectory=/var/lib/etcd/
ExecStart=/usr/local/bin/etcd    
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
EOF

WorkingDirectory: etcd's working (data) directory;

EnvironmentFile: the etcd configuration file;

4.5 Start etcd

1) Create the data directory

$ mkdir -p /var/lib/etcd && chmod 700 /var/lib/etcd  #the directory ETCD_DATA_DIR points at; permissions should be -rwx------

2) Reload the systemd configuration

$ systemctl daemon-reload

3) Start the etcd service

$ systemctl start etcd.service
$ systemctl enable etcd.service
$ systemctl status etcd

4.6 Check cluster health

$ etcdctl endpoint health --write-out=table --endpoints=https://192.168.2.10:2379 \\
--cacert=/etc/kubernetes/pki/ca.pem \\
--cert=/etc/kubernetes/pki/etcd/etcd.pem  \\
--key=/etc/kubernetes/pki/etcd/etcd-key.pem 
+---------------------------+--------+------------+-------+
|         ENDPOINT          | HEALTH |    TOOK    | ERROR |
+---------------------------+--------+------------+-------+
| https://192.168.2.10:2379 |  true  | 5.500499ms |       |
+---------------------------+--------+------------+-------+
  • --write-out=table: print the result as a table
  • --cacert: the CA certificate

  • --cert: the etcd server certificate

  • --key: the etcd private key

  • --endpoints: the cluster endpoints
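
Beyond the health check, a quick write/read round-trip confirms the data path end to end (the foo key below is just a scratch value, deleted afterwards):

$ FLAGS="--endpoints=https://192.168.2.10:2379 --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem"
$ etcdctl $FLAGS put foo bar    #write a scratch key
$ etcdctl $FLAGS get foo        #read it back
$ etcdctl $FLAGS del foo        #clean up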

5. Deploy kube-apiserver

5.1 Issue the kube-apiserver certificate

The CA issues a certificate to kube-apiserver (the client).

1) Create the apiserver CSR file

$ cfssl  print-defaults csr  > kube-apiserver-csr.json
$ cat <<EOF > kube-apiserver-csr.json
{
    "CN": "kubernetes",
    "hosts": [
        "127.0.0.1",
        "192.168.2.10",
        "192.168.2.11",
        "10.96.0.1",
        "10.244.0.1",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "k8s",
            "OU": "system"
        }
    ]
}
EOF

127.0.0.1: the local loopback address;

10.96.0.1: the first IP of the Service range;

10.244.0.1: the first IP of the Pod range (a quick SAN check follows the generation step below);

2) Generate the apiserver certificate

$ cfssl gencert  -ca=/etc/kubernetes/pki/ca.pem  \\
 -ca-key=/etc/kubernetes/pki/ca-key.pem \\
 -config=ca-config.json  \\
 -profile=kubernetes  kube-apiserver-csr.json  | cfssljson  -bare /etc/kubernetes/pki/kube-apiserver
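
Because a name missing from the hosts list surfaces later as TLS verification failures, it is worth confirming the SANs actually embedded in the certificate; one way, using openssl:

$ openssl x509 -in /etc/kubernetes/pki/kube-apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'

The output should list every IP address and DNS name from kube-apiserver-csr.json.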

5.2 Generate the kube-apiserver aggregation (front-proxy) certificates

Needed by the metrics service.

front-proxy-ca.pem

1) Create the CA CSR file

$ cat > front-proxy-ca-csr.json  << EOF 
{
  "CN": "kubernetes",
  "key": {
     "algo": "rsa",
     "size": 2048
  },
  "ca": {
    "expiry": "87600h"
  }
}
EOF

2) Create the front-proxy CA

$ cfssl gencert -initca  front-proxy-ca-csr.json  | cfssljson  -bare  /etc/kubernetes/pki/front-proxy-ca
$ ls /etc/kubernetes/pki/front-proxy-ca*
front-proxy-ca.csr  front-proxy-ca-key.pem   front-proxy-ca.pem

3) Create the client CSR file

$ cat > front-proxy-client-csr.json  << EOF 
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF

4) Issue the client certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/front-proxy-ca.pem  \\
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem \\
-config=ca-config.json   \\
-profile=kubernetes  front-proxy-client-csr.json | cfssljson -bare  /etc/kubernetes/pki/front-proxy-client

5.3 Create the token.csv file

Format: token,username,UID,group

$ cat  << EOF > /etc/kubernetes/token.csv
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
$ cat  /etc/kubernetes/token.csv
5b6ba69b9aab4600407f9bf2c157fefa,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

The TLS bootstrapping mechanism

Once the apiserver enables TLS authentication, every node's kubelet must present a valid certificate signed by the apiserver's CA to communicate with the apiserver. With many nodes, issuing these client certificates by hand is a lot of work and makes scaling the cluster harder.

To simplify this, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically: the kubelet connects to the apiserver as a low-privileged user and requests a certificate, which the apiserver signs dynamically.

Bootstrap programs exist in many systems (Linux has one, for example); they are generally preconfigured and loaded at boot to bring up a given environment. Likewise, the Kubernetes kubelet can load such a configuration file at startup, with content along these lines:

apiVersion: v1
clusters: null
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user: {}

The TLS bootstrapping flow in detail:

The role of RBAC
Once TLS solves the transport problem, authorization is handled by RBAC (other authorization models, such as ABAC, can also be used). RBAC specifies which APIs a user or group (the subject) may call; combined with TLS encryption, the apiserver reads the client certificate's CN field as the username and the O field as the group.

This means:

  • First, to talk to the apiserver a client must use a certificate signed by the apiserver's CA, so that a trust relationship and a TLS connection can be established;
  • Second, the certificate's CN and O fields supply the user and group that RBAC needs.

    The kubelet's first start
    TLS bootstrapping lets the kubelet request a certificate from the apiserver and then use it to connect; but how does the kubelet connect the very first time, before it has any certificate?

The apiserver configuration points at a token.csv file containing a preset user; that user's token, and trust in the apiserver's CA, are written into the bootstrap.kubeconfig used by the kubelet. On first start, the kubelet uses bootstrap.kubeconfig to establish a TLS session with the apiserver and presents the preset user's token to declare its RBAC identity to the apiserver.

On first start the kubelet may report a 401 "no access to apiserver" error. That is because, by default, the kubelet declares its identity via the preset token in bootstrap.kubeconfig and then creates a CSR request; but until we intervene, that user has no permissions at all, not even to create CSR requests;

So we must create a ClusterRoleBinding that binds the preset kubelet-bootstrap user to the built-in ClusterRole system:node-bootstrapper, allowing it to submit CSR requests.
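
Concretely, that is the binding created in section 9.1, step 5:

$ kubectl create clusterrolebinding kubelet-bootstrap \\
--user=kubelet-bootstrap  --clusterrole=system:node-bootstrapper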

5.4 Create the kube-apiserver config file

The core options, shown piece by piece (a complete, startable file appears at the end of this section):

--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota \\
--anonymous-auth=false \\
--bind-address=192.168.2.10 \\
--secure-port=6443 \\
--insecure-port=0 \\
--advertise-address=192.168.2.10 \\
--authorization-mode=Node,RBAC \\
--runtime-config=api/all=true \\
--enable-bootstrap-token-auth \\
--service-cluster-ip-range=10.96.0.0/12 \\
--token-auth-file=/etc/kubernetes/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--kubelet-client-certificate=/etc/kubernetes/pki/kube-apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/kube-apiserver-key.pem \\
--service-account-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-account-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-account-issuer=https://kubernetes.default.svc.cluster.local \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kube-apiserver-audit.log \\
--event-ttl=1h \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=4

Option notes:

[Cluster section]

[etcd section]
--etcd-servers=https://192.168.2.10:2379 \\  #etcd cluster endpoints
--etcd-cafile=/etc/kubernetes/pki/ca.pem \\  #the CA certificate
--etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem \\  
--etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem \\   #the etcd private key
[Aggregation-layer section]
--requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
--proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
--proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
--requestheader-allowed-names=front-proxy-client  \\
--requestheader-group-headers=X-Remote-Group  \\
--requestheader-extra-headers-prefix=X-Remote-Extra-  \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true

[Logging/audit section]

Kubernetes currently offers two audit backends: the log backend, which writes events to a file, and the webhook backend, which sends them to a remote log server.

--allow-privileged=true \\
--apiserver-count=3 \\ 
--event-ttl=1h \\ 

--audit-log-maxage=30 \\   #maximum number of days to retain old audit log files
--audit-log-maxbackup=3 \\  #maximum number of audit log files to retain
--audit-log-maxsize=100 \\  #maximum size of an audit log file (megabytes) before rotation
--audit-log-path=/var/log/kube-apiserver-audit.log \\  #log file path for audit events; omitting this flag disables the log backend; "-" means standard output
--audit-log-format=json \\ #format of the audit log (default: json);
--alsologtostderr=true \\ 
--logtostderr=false \\
--log-dir=/etc/kubernetes/logs/ \\
--v=4"

[A startable config]:

$ cat  /etc/kubernetes/conf/kube-apiserver.conf 
KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \\
  --anonymous-auth=false \\
  --bind-address=192.168.2.10 \\
  --secure-port=6443 \\
  --advertise-address=192.168.2.10 \\
  --insecure-port=0 \\
  --authorization-mode=Node,RBAC \\
  --runtime-config=api/all=true \\
  --enable-bootstrap-token-auth \\
  --service-cluster-ip-range=10.96.0.0/12 \\
  --token-auth-file=/etc/kubernetes/token.csv \\
  --service-node-port-range=30000-50000 \\
  --tls-cert-file=/etc/kubernetes/pki/kube-apiserver.pem  \\
  --tls-private-key-file=/etc/kubernetes/pki/kube-apiserver-key.pem \\
  --client-ca-file=/etc/kubernetes/pki/ca.pem \\
  --kubelet-client-certificate=/etc/kubernetes/pki/kube-apiserver.pem \\
  --kubelet-client-key=/etc/kubernetes/pki/kube-apiserver-key.pem \\
  --service-account-key-file=/etc/kubernetes/pki/ca-key.pem \\
  --service-account-signing-key-file=/etc/kubernetes/pki/ca-key.pem  \\
  --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
  --etcd-servers=https://192.168.2.10:2379 \\
  --etcd-cafile=/etc/kubernetes/pki/ca.pem \\
  --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem \\
  --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem \\
  --enable-swagger-ui=true \\
  --enable-aggregator-routing=true \\
  --requestheader-allowed-names=front-proxy-client \\
  --requestheader-username-headers=X-Remote-User \\
  --requestheader-group-headers=X-Remote-Group \\
  --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem \\
  --requestheader-extra-headers-prefix=X-Remote-Extra- \\
  --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem \\
  --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem \\
  --allow-privileged=true \\
  --apiserver-count=3 \\
  --audit-log-maxage=30 \\
  --audit-log-maxbackup=3 \\
  --audit-log-maxsize=100 \\
  --audit-log-path=/var/log/kube-apiserver-audit.log \\
  --event-ttl=1h \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=4"

--requestheader-allowed-names: the allowed CN values of the aggregation (front-proxy) client certificate

5.5 Create the kube-apiserver systemd unit

$ cat <<EOF > /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
Wants=etcd.service

[Service]
ExecStart=/usr/local/bin/kube-apiserver  \\$KUBE_APISERVER_OPTS
EnvironmentFile=-/etc/kubernetes/conf/kube-apiserver.conf
Restart=on-failure
RestartSec=10s
Type=notify
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
EOF

5.6 Start kube-apiserver

$ systemctl daemon-reload   #reload the systemd configuration
$ systemctl enable --now kube-apiserver.service 
$ systemctl status kube-apiserver.service  -l

5.7 Smoke test

$ curl --insecure https://192.168.2.10:6443/
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}

Getting a response means the service is up! The 401 above is the expected state: the request simply has not been authenticated yet.
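
To test authentication (as opposed to authorization), you can present the bootstrap token from token.csv. At this stage the expected answer for a resource request is 403 Forbidden: the kubelet-bootstrap identity is valid, but it has no RBAC permissions yet (the binding that fixes this comes in section 9.1):

$ TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)
$ curl --insecure -H "Authorization: Bearer $TOKEN" https://192.168.2.10:6443/api/v1/nodes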

6. Deploy kubectl

kubectl is the client tool for operating on Kubernetes resources: create, delete, update, query, and so on.

How does kubectl know which cluster to talk to? It needs a kubeconfig file such as /etc/kubernetes/admin.conf; kubectl follows that file's settings to reach the cluster. The /etc/kubernetes/admin.conf file records the target cluster and the certificates to use.

6.1 Issue the admin certificate

1) Create the CSR file

$ cfssl  print-defaults csr  > admin-csr.json
$ cat <<EOF > admin-csr.json
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:masters",
            "OU": "system"
        }
    ]
}
EOF

Note: kube-apiserver later uses RBAC to authorize client requests (from kubelet, kube-proxy, Pods, etc.);

kube-apiserver predefines some RoleBindings used by RBAC; for example, cluster-admin binds Group system:masters to Role cluster-admin, which grants permission to call all kube-apiserver APIs;

O sets the certificate's Group to system:masters. When this certificate is used against kube-apiserver, authentication passes because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, it is granted access to all APIs;

With O set to system:masters, the built-in cluster-admin ClusterRoleBinding binds the system:masters group to the cluster-admin ClusterRole.

2) Generate the certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/ca.pem \\
-ca-key=/etc/kubernetes/pki/ca-key.pem  -config=ca-config.json \\
-profile=kubernetes admin-csr.json | cfssljson -bare  /etc/kubernetes/pki/admin
$  ls  /etc/kubernetes/pki/admin*
admin.csr  admin-key.pem  admin.pem

6.2 Create the kubeconfig file

1) Set the cluster entry

Set the cluster parameters

$ kubectl config set-cluster kubernetes --embed-certs=true \\
--certificate-authority=/etc/kubernetes/pki/ca.pem \\
--server=https://192.168.2.10:6443  \\
--kubeconfig=/etc/kubernetes/kube.config
$ cat /etc/kubernetes/kube.config                #inspect the generated file

2) Set the user entry

Set the client credentials

$ kubectl  config set-credentials  admin --embed-certs=true \\
--client-certificate=/etc/kubernetes/pki/admin.pem  \\
--client-key=/etc/kubernetes/pki/admin-key.pem \\
--kubeconfig=/etc/kubernetes/kube.config
$ cat /etc/kubernetes/kube.config   #how has the file changed from the previous step?

3) Set the context entry

Set the context parameters

$ kubectl  config  set-context kubernetes \\
--cluster=kubernetes \\
--user=admin  \\
--kubeconfig=/etc/kubernetes/kube.config
$ cat  /etc/kubernetes/kube.config
...
contexts:
- context:
    cluster: kubernetes
    user: admin
  name: kubernetes
current-context: ""
...

4) Set the current context

Make this context the default (use one context as the default environment)

$ kubectl config use-context kubernetes --kubeconfig=/etc/kubernetes/kube.config

6.3 Set the KUBECONFIG environment variable

With KUBECONFIG set, kubectl automatically loads it to decide which cluster's resources to manage.

You can also use the approach below, which is what kubeadm suggests after initializing a cluster:

$ cp -i /etc/kubernetes/admin.conf /root/.kube/config

kubectl will then load /root/.kube/config when operating on cluster resources.

If KUBECONFIG is set, it takes precedence; without it, /root/.kube/config determines which cluster kubectl manages.

$ mkdir -p  ~/.kube
$ cp -i /etc/kubernetes/kube.config  ~/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ echo "export KUBECONFIG=/etc/kubernetes/kube.config" >> /etc/profile
$ source /etc/profile
$ echo $KUBECONFIG
/etc/kubernetes/kube.config
$ kubectl get nodes
No resources found
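
Because the admin certificate's O field is system:masters (section 6.1), this kubeconfig carries cluster-admin rights, which can be confirmed with:

$ kubectl auth can-i '*' '*'
yes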

6.4 Grant the kubernetes user access to the kubelet API

$ kubectl create clusterrolebinding kube-apiserver:kubelet-apis \\
--clusterrole=system:kubelet-api-admin --user kubernetes

6.5 Check component status

$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.2.10:6443

To further debug and diagnose cluster problems, use kubectl cluster-info dump.

$  kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}
$ kubectl get all -A
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   14h

6.6 kubectl cheat sheet (tab completion)

Official reference: https://kubernetes.io/zh/docs/reference/kubectl/cheatsheet/

Enable kubectl auto-completion

$ yum install -y bash-completion   #install the bash-completion package first
$ source <(kubectl completion bash) 
$ echo "source <(kubectl completion bash)" >> ~/.bashrc   # permanently enable completion in your bash shell

7. Deploy kube-controller-manager

Component reference: https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-controller-manager/

7.1 Issue the controller-manager certificate

1) Create the CSR file

$ cfssl  print-defaults csr  > kube-controller-manager-csr.json
$ cat <<EOF > kube-controller-manager-csr.json
{
    "CN": "system:kube-controller-manager",
    "hosts": [
        "127.0.0.1",
        "192.168.2.10"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:kube-controller-manager",
            "OU": "system"
        }
    ]
}
EOF

hosts: the master node IP address(es).

2) Generate the certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/ca.pem \\
-ca-key=/etc/kubernetes/pki/ca-key.pem \\
-config=ca-config.json \\
-profile=kubernetes kube-controller-manager-csr.json | cfssljson \\
-bare  /etc/kubernetes/pki/kube-controller-manager

$ ls  /etc/kubernetes/pki/kube-controller-manager*
/etc/kubernetes/pki/kube-controller-manager.csr      /etc/kubernetes/pki/kube-controller-manager.pem
/etc/kubernetes/pki/kube-controller-manager-key.pem

7.2 Create the kubeconfig file

1) Set the cluster entry

Set the cluster parameters

$ kubectl config set-cluster kubernetes --embed-certs=true \\
--certificate-authority=/etc/kubernetes/pki/ca.pem  \\
--server=https://192.168.2.10:6443 \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig
$ cat /etc/kubernetes/kube-controller-manager.kubeconfig

2) Set the user entry

Set the client credentials

$ kubectl config set-credentials system:kube-controller-manager \\
 --embed-certs=true \\
--client-certificate=/etc/kubernetes/pki/kube-controller-manager.pem \\
--client-key=/etc/kubernetes/pki/kube-controller-manager-key.pem  \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig
$ cat /etc/kubernetes/kube-controller-manager.kubeconfig    #how has the file changed from before?

3) Set the context entry

Set the context parameters

$ kubectl config set-context system:kube-controller-manager \\
--cluster=kubernetes --user=system:kube-controller-manager \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig

4) Set the default context

$ kubectl config use-context system:kube-controller-manager \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig

7.3 Create the config file

$ cat /etc/kubernetes/conf/kube-controller-manager.conf
KUBE_CONTROLLER_MANAGER_OPTS="--port=0 \\
  --secure-port=10252 \\
  --bind-address=127.0.0.1 \\
  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
  --service-cluster-ip-range=10.96.0.0/12 \\
  --cluster-name=kubernetes \\
  --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
  --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
  --allocate-node-cidrs=true \\
  --cluster-cidr=10.244.0.0/16 \\
  --experimental-cluster-signing-duration=87600h \\
  --root-ca-file=/etc/kubernetes/pki/ca.pem \\
  --service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \\
  --leader-elect=true \\
  --feature-gates=RotateKubeletServerCertificate=true \\
  --controllers=*,bootstrapsigner,tokencleaner \\
  --horizontal-pod-autoscaler-use-rest-clients=true \\
  --horizontal-pod-autoscaler-sync-period=10s \\
  --tls-cert-file=/etc/kubernetes/pki/kube-controller-manager.pem \\
  --tls-private-key-file=/etc/kubernetes/pki/kube-controller-manager-key.pem \\
  --use-service-account-credentials=true \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=2"

7.4 Create the systemd unit

$ cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/conf/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

7.5 Start the service

$ systemctl daemon-reload
$ systemctl enable kube-controller-manager.service 
$ systemctl start  kube-controller-manager.service 
$ systemctl status kube-controller-manager.service 
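
As a quick liveness check, the controller manager answers /healthz on the secure port configured above (10252 in this config; -k skips certificate verification, since the serving certificate was not issued for 127.0.0.1):

$ curl -k https://127.0.0.1:10252/healthz
ok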

8. Deploy kube-scheduler

Docs: https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-scheduler/

8.1 Issue the kube-scheduler certificate

1) Create the CSR file

$ cfssl  print-defaults csr  > kube-scheduler-csr.json
$ cat <<EOF > kube-scheduler-csr.json
{
    "CN": "system:kube-scheduler",
    "hosts": [
        "127.0.0.1",
        "192.168.2.10"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "BeiJing",
            "L": "BeiJing",
            "O": "system:kube-scheduler",
            "OU": "system"
        }
    ]
}
EOF

The hosts list contains the IPs of all kube-scheduler nodes;
with CN and O both set to system:kube-scheduler, the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions its work requires.

2) Generate the certificate

$ cfssl gencert -ca=/etc/kubernetes/pki/ca.pem  \\
-ca-key=/etc/kubernetes/pki/ca-key.pem \\
-config=ca-config.json \\
-profile=kubernetes kube-scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-scheduler
$ ls /etc/kubernetes/pki/kube-scheduler*
kube-scheduler.csr  kube-scheduler-key.pem  kube-scheduler.pem

8.2 Create the kubeconfig file

1) Set the cluster entry

To create kube-scheduler's kubeconfig file, first set the cluster parameters (i.e. the cluster entry).

$ kubectl config set-cluster kubernetes \\
--embed-certs=true \\
--certificate-authority=/etc/kubernetes/pki/ca.pem \\
--server=https://192.168.2.10:6443  \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

2) Set the user entry

$ kubectl config set-credentials system:kube-scheduler \\
--embed-certs=true \\
--client-certificate=/etc/kubernetes/pki/kube-scheduler.pem \\
--client-key=/etc/kubernetes/pki/kube-scheduler-key.pem  \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

3) Set the context entry

Set the context environment

$ kubectl config set-context system:kube-scheduler \\
--cluster=kubernetes --user=system:kube-scheduler \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

4) Set the default context

$ kubectl config use-context system:kube-scheduler \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig

8.3 Create the kube-scheduler config file

$ cat <<EOF > /etc/kubernetes/conf/kube-scheduler.conf
KUBE_SCHEDULER_OPTS="--bind-address=127.0.0.1 \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--authentication-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--authorization-kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--leader-elect=true \\
--alsologtostderr=true \\
--logtostderr=false \\
--log-dir=/var/log/kubernetes \\
--v=2"
EOF

--authentication-kubeconfig: kubeconfig file (optional); if empty, all token requests are treated as anonymous and no client CA is looked up in the cluster.

--authorization-kubeconfig: kubeconfig file (optional); if empty, all requests not skipped by the authorizer are rejected.

--bind-address: the IP address on which to listen for the --secure-port port (default 0.0.0.0).

--leader-elect: perform leader election; enable this flag for HA clusters (default: true)

--kubeconfig: path to the kubeconfig file (deprecated)

--logtostderr: log to standard error instead of files (default: true)

--master: the address of the Kubernetes API server (overrides any value in the kubeconfig).

8.4 Create the systemd unit

$ cat <<EOF > /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/conf/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \\$KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

8.5 Start the service

$ systemctl daemon-reload
$ systemctl enable kube-scheduler.service
$ systemctl start kube-scheduler.service
$ systemctl status kube-scheduler.service
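
In v1.20 the scheduler still serves an unauthenticated health endpoint on its deprecated insecure port 10251 (the same endpoint kubectl get componentstatuses probed earlier), so a quick check is:

$ curl http://127.0.0.1:10251/healthz
ok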

9. Deploy kubelet

To run workload Pods on the master as well, deploy the kubelet and kube-proxy services on the master node too.

9.1 Create kubelet-bootstrap.kubeconfig

$ BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)

1) Set the cluster parameters

$ kubectl config set-cluster kubernetes  --embed-certs=true \\
--certificate-authority=/etc/kubernetes/pki/ca.pem \\
--server=https://192.168.2.10:6443 \\
--kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig

2) Set the client credentials

$ kubectl config set-credentials kubelet-bootstrap \\
--token=$BOOTSTRAP_TOKEN \\
--kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig

3) Set the context parameters

$ kubectl config set-context default --cluster=kubernetes \\
--user=kubelet-bootstrap \\
--kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig

4) Set the default context

$ kubectl config use-context default --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig

5) Create the role binding

$ kubectl create clusterrolebinding kubelet-bootstrap \\
--user=kubelet-bootstrap  --clusterrole=system:node-bootstrapper
$ kubectl get clusterrolebinding |  grep kubelet-bootstrap
kubelet-bootstrap    ClusterRole/system:node-bootstrapper         10s

9.2 Create the kubelet config file

$ cat <<EOF > /etc/kubernetes/conf/kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/pki/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.2.10",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.10"]
}
EOF

cgroupDriver: must match Docker's driver; since Docker was configured with the systemd driver, set this to systemd. This setting matters: if it does not match, nodes will fail to join the cluster.

clusterDNS: comma-separated DNS server IPs, used to configure DNS in containers whose Pod has dnsPolicy=ClusterFirst. Conventionally the 10th IP of the Service range.

Note: all DNS servers in the list must hold the same record set, or name resolution in the cluster may fail.

9.3 Create the systemd unit

$ cat <<EOF > /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/pki \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/conf/kubelet.json \\
  --network-plugin=cni \\
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

9.4 Start the kubelet service

$ mkdir /var/lib/kubelet     #create the kubelet working directory
$ systemctl daemon-reload
$ systemctl enable kubelet
$ systemctl start kubelet
$ systemctl status kubelet

9.5 View the incoming CSR request

After confirming the kubelet service started, approve the bootstrap request on the master. The command below shows the pending CSR that each node's kubelet submits on first start:

$ kubectl get csr
NAME         AGE    SIGNERNAME          REQUESTOR           CONDITION
node-csr-hlbLCTkZFZe6f62F2NYacepBa6IqWTD9NzQGNu7qPjc   4m9s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

The CSR request is in the Pending state.

9.6 Approve the request on the master

On the master node, approve the received CSR request to let the node join the cluster!

$ kubectl certificate approve $(kubectl get csr | awk '{print $1}' | grep -v NAME)
$ kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-hlbLCTkZFZe6f62F2NYacepBa6IqWTD9NzQGNu7qPjc   7m54s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Approved,Issued
$ kubectl get nodes

The CSR's CONDITION has changed from Pending to Approved,Issued!

$ cat /etc/kubernetes/kubelet.kubeconfig

Once the node joins the cluster, the /etc/kubernetes/kubelet.kubeconfig credential file is generated automatically.

10. Deploy kube-proxy

10.1 Issue the certificate

1) Create the CSR file

$ cfssl  print-defaults csr  > kube-proxy-csr.json
$ cat <<EOF > kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "system"
    }
  ]
}
EOF

2) Generate the certificate

$  cfssl gencert  \\
-ca=/etc/kubernetes/pki/ca.pem  \\
-ca-key=/etc/kubernetes/pki/ca-key.pem \\
-config=ca-config.json  -profile=kubernetes \\
kube-proxy-csr.json | cfssljson -bare  /etc/kubernetes/pki/kube-proxy

10.2 Create the kubeconfig file

1) Set the cluster entry

$ kubectl config set-cluster kubernetes --embed-certs=true \\
--certificate-authority=/etc/kubernetes/pki/ca.pem \\
--server=https://192.168.2.10:6443  \\
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

2) Set the user entry

$ kubectl config set-credentials kube-proxy --embed-certs=true \\
--client-certificate=/etc/kubernetes/pki/kube-proxy.pem \\
--client-key=/etc/kubernetes/pki/kube-proxy-key.pem \\
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

3) Set the context entry

$ kubectl config set-context default \\
--cluster=kubernetes --user=kube-proxy \\
--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

4) Set the default context

$ kubectl config use-context default --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

10.3 Create the config file

$ cat <<EOF > /etc/kubernetes/conf/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.2.10
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16 
healthzBindAddress: 192.168.2.10:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.2.10:10249
mode: "ipvs"
EOF

clusterCIDR: must match the network plugin's Pod network, otherwise deploying the network plugin will fail;

healthzBindAddress: IP address and port of the health-check endpoint;

metricsBindAddress: IP address and port used by the metrics server;

10.4 Create the systemd unit

$ cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/conf/kube-proxy.yaml \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

10.5 Start the kube-proxy service

$ mkdir -p /var/lib/kube-proxy
$ systemctl daemon-reload
$ systemctl enable kube-proxy --now
$ systemctl status kube-proxy
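
Since mode is set to ipvs, the ipvsadm tool installed during host initialization can confirm that kube-proxy programmed IPVS virtual servers (at minimum one for the kubernetes Service IP), and the healthz endpoint configured above can be probed directly:

$ ipvsadm -Ln                               #list IPVS virtual servers and their backends
$ curl http://192.168.2.10:10256/healthz    #kube-proxy health endpoint from kube-proxy.yaml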

11. Add a worker node

Add the node1 host (192.168.2.11) to the Kubernetes cluster.

  • Copy the certificates and credential files from the master node to the node
  • Deploy the kubelet service
  • Deploy the kube-proxy service

11.1 Copy the certificate files

#Copy the CA certificates:
$ scp  192.168.2.10:/etc/kubernetes/pki/ca*    /etc/kubernetes/pki/

#Copy the kubelet bootstrap credentials:
$ scp 192.168.2.10:/etc/kubernetes/kubelet-bootstrap.kubeconfig  /etc/kubernetes/kubelet-bootstrap.kubeconfig
#Copy the kube-proxy.kubeconfig file:
$ scp  192.168.2.10:/etc/kubernetes/kube-proxy.kubeconfig   /etc/kubernetes/kube-proxy.kubeconfig

11.2 Deploy the kubelet service

1) Create the config file

$ cat  /etc/kubernetes/conf/kubelet.json
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/pki/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.2.11",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.10"]
}

address is set to this node's own IP (JSON allows no inline comments); everything else matches the master's kubelet.json.
2) Create the systemd unit

$ cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \\
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \\
  --cert-dir=/etc/kubernetes/pki \\
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
  --config=/etc/kubernetes/conf/kubelet.json \\
  --network-plugin=cni \\
  --pod-infra-container-image=k8s.gcr.io/pause:3.2 \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

3) Start the kubelet service

$ mkdir /var/lib/kubelet     #create the kubelet working directory
$ systemctl daemon-reload
$ systemctl enable kubelet
$ systemctl start kubelet
$ systemctl status kubelet

11.3 Deploy kube-proxy

1) Create the config file

$ cat <<EOF > /etc/kubernetes/conf/kube-proxy.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.2.11
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 10.244.0.0/16 
healthzBindAddress: 192.168.2.11:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.2.11:10249
mode: "ipvs"
EOF

2) Create the systemd unit

$ cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/conf/kube-proxy.yaml \\
  --alsologtostderr=true \\
  --logtostderr=false \\
  --log-dir=/var/log/kubernetes \\
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

3) Start the service

$ mkdir -p /var/lib/kube-proxy
$ systemctl daemon-reload
$ systemctl enable kube-proxy --now
$ systemctl status kube-proxy

12. Deploy cluster add-ons

  • Install the Calico network plugin
  • Install the CoreDNS plugin
  • Install the metrics data-collection plugin
  • Install nginx-ingress-controller

12.1 Install the Calico network plugin

1) Download the manifest

$ wget https://docs.projectcalico.org/v3.18/manifests/calico.yaml --no-check-certificate

2) Adjust the configuration

$ vim calico.yaml           
  - name: CALICO_IPV4POOL_CIDR
    value: "10.244.0.0/16"

Set this to your own Pod network.

3) Apply the manifest

$ kubectl apply -f calico.yaml       #create the resources
$ kubectl -n kube-system get pod |  grep calico    #check Pod status
NAME                           READY   STATUS    RESTARTS  AGE
calico-kube-controllers-56c7cdffc6-w9xnr   1/1     Running   0      18m
calico-node-j9b4r                  1/1     Running   0      18m
calico-node-xwzc2                  1/1     Running   0      18m

12.2 Install the CoreDNS plugin

Kubernetes 1.20.x pairs with CoreDNS v1.7.0; see the version matrix:

https://github.com/coredns/deployment/blob/master/kubernetes/CoreDNS-k8s_version.md

1) Download the manifest template

$ wget https://raw.githubusercontent.com/coredns/deployment/master/kubernetes/coredns.yaml.sed
$ mv coredns.yaml.sed coredns.yaml
$ sed -i '/clusterIP/s/CLUSTER_DNS_IP/10.96.0.10/' coredns.yaml
$ grep clusterIP coredns.yaml     

clusterIP: must match the clusterDNS value in the kubelet config file.

2) Apply the manifest

$ kubectl apply -f coredns.yaml 

3) Pull the image

$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0
$ docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
$ docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

4) Check service status

$ kubectl -n kube-system  get pod |  grep dns
coredns-7bf4bd64bd-hpwnw    1/1     Running   0     3m8s
$ kubectl -n kube-system  get  svc
NAME     TYPE     CLUSTER-IP  EXTERNAL-IP  PORT(S)            AGE
kube-dns  ClusterIP  10.96.0.10   <none>    53/UDP,53/TCP,9153/TCP  2m19s

5) Verify the service

Check the Pod logs

$ kubectl -n kube-system  logs  coredns-7bf4bd64bd-msg22   
.:53
[INFO] plugin/reload: Running configuration MD5 = b0741fcbd8bd79287446297caa87f7a1
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d

Create a test Pod

$ cat nginx-test.yaml
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-controller
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.6
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
$ kubectl apply -f nginx-test.yaml

Then open the nginx service in a browser via the NodePort (if the page loads, the service path works).
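
To exercise CoreDNS itself, resolve the kubernetes Service name from inside a Pod (busybox:1.28 is used here because nslookup is broken in many later busybox images):

$ kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup kubernetes.default

The reported DNS server should be 10.96.0.10, matching the kubelet's clusterDNS.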


12.3 Install the metrics data-collection plugin

The kube-apiserver config must include the --enable-aggregator-routing=true flag to enable aggregation-layer authentication.

Adjust the certificate settings (use the .pem files rather than .csr).

  • Download the manifest
$ wget  https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.1/components.yaml
  • Edit the manifest
$ vim components.yaml
...
containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls   #skip TLS verification of kubelet certificates
  • Pull the image
$ grep image: components.yaml   #list the required images
   image: k8s.gcr.io/metrics-server/metrics-server:v0.6.1
$ docker  pull registry.aliyuncs.com/google_containers/metrics-server:v0.6.1
$ docker tag  registry.aliyuncs.com/google_containers/metrics-server:v0.6.1   k8s.gcr.io/metrics-server/metrics-server:v0.6.1
  • Apply the manifest
$ kubectl apply   -f components.yaml 
$ kubectl -n kube-system get pod metrics-server-54d4f7d9cf-phfnz 
NAME                     READY   STATUS    RESTARTS   AGE
metrics-server-54d4f7d9cf-phfnz   1/1     Running   0       69s
  • View node resource usage
$ kubectl top nodes
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
lidabai-master   119m       5%    1694Mi        44%       
lidabai-node1    67m       3%     813Mi        21%     

Seeing CPU and memory figures for the nodes means the metrics service is working!


12.4 Install the nginx-ingress-controller plugin


12.5 Install the Helm package manager

Installing it on one of the cluster's master nodes is enough!

$ wget https://get.helm.sh/helm-v3.7.2-linux-amd64.tar.gz
$ tar zxvf helm-v3.7.2-linux-amd64.tar.gz
$ cp  linux-amd64/helm  /usr/local/bin/
$ helm version
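
A quick sanity check is to add a chart repository and search it (the bitnami repository here is just an example):

$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update
$ helm search repo bitnami/nginx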

13. Cluster Verification

Check the node status

$ kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
lidabai-master   NotReady   <none>   14s   v1.20.15
lidabai-node1    NotReady   <none>   33m   v1.20.15

Verify that the cluster is healthy! (The nodes above show NotReady until the Calico Pods from section 12.1 are running, after which they report Ready.)
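
The ROLES column shows <none> because binary-deployed nodes carry no role labels; if you want the familiar display, they can be added by hand (the label keys below follow the usual convention and are purely cosmetic):

$ kubectl label node lidabai-master node-role.kubernetes.io/master=
$ kubectl label node lidabai-node1 node-role.kubernetes.io/worker=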
