k8s cluster: deploying the master node
Posted by rdchenxi
Deploying the apiserver
The api-server deployment script:

[root@master k8s]# cat apiserver.sh
#!/bin/bash

MASTER_ADDRESS=$1      # master node IP
ETCD_SERVERS=$2        # etcd endpoints

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=${ETCD_SERVERS} \
--bind-address=${MASTER_ADDRESS} \
--secure-port=6443 \
--advertise-address=${MASTER_ADDRESS} \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
Download the binary package
[root@master k8s]# wget https://dl.k8s.io/v1.10.13/kubernetes-server-linux-amd64.tar.gz
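Optionally, confirm that the archive downloaded intact before extracting it. A minimal sketch; the value printed has to be compared by hand against the checksum published in the official v1.10.13 release notes:

[root@master k8s]# sha256sum kubernetes-server-linux-amd64.tar.gz    # compare against the published checksum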
Extract and install
[root@master k8s]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# ls
apiextensions-apiserver              cloud-controller-manager.tar  kube-apiserver             kube-controller-manager             kubectl     kube-proxy.docker_tag  kube-scheduler.docker_tag
cloud-controller-manager             hyperkube                     kube-apiserver.docker_tag  kube-controller-manager.docker_tag  kubelet     kube-proxy.tar         kube-scheduler.tar
cloud-controller-manager.docker_tag  kubeadm                       kube-apiserver.tar         kube-controller-manager.tar         kube-proxy  kube-scheduler         mounter
[root@master ~]# mkdir /opt/kubernetes/{cfg,ssl,bin} -pv
mkdir: created directory "/opt/kubernetes"
mkdir: created directory "/opt/kubernetes/cfg"
mkdir: created directory "/opt/kubernetes/ssl"
mkdir: created directory "/opt/kubernetes/bin"
[root@master bin]# cp kube-apiserver kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master k8s]# ./apiserver.sh 192.168.10.11 https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379

The generated config, edited and annotated below (the # comments are explanations only, not part of the file):

[root@master cfg]# vi kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=false
--log-dir=/opt/kubernetes/logs            # log directory; make sure this directory exists
--v=4
--etcd-servers=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379
--bind-address=192.168.10.11              # IP address the apiserver binds to
--secure-port=6443                        # HTTPS (secure) port
--advertise-address=192.168.10.11         # cluster advertise address; other nodes reach the apiserver via this IP
--allow-privileged=true                   # allow privileged containers
--service-cluster-ip-range=10.0.0.0/24    # virtual IP range used for Service load balancing
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction   # admission plugins; decide which advanced features are enabled
--authorization-mode=RBAC,Node            # authorization mode
--kubelet-https=true                      # the apiserver connects to kubelets over HTTPS
--enable-bootstrap-token-auth             # authenticate bootstrap clients and auto-issue certificates
--token-auth-file=/opt/kubernetes/cfg/token.csv     # token file
--service-node-port-range=30000-50000     # NodePort range for Services
--tls-cert-file=/opt/kubernetes/ssl/server.pem      # apiserver certificate
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem
--client-ca-file=/opt/kubernetes/ssl/ca.pem
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem    # CA key
--etcd-cafile=/opt/etcd/ssl/ca.pem        # etcd certificates
--etcd-certfile=/opt/etcd/ssl/server.pem
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
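Because the edited config sets --logtostderr=false and points --log-dir at /opt/kubernetes/logs, that directory must exist before the apiserver is started, for example:

[root@master cfg]# mkdir -p /opt/kubernetes/logs    # log directory referenced by --log-dir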
Generate the certificates and the token file
[root@master k8s]# cat k8s-cert.sh
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

#-----------------------
# hosts: 10.206.176.19 is the master IP and 10.206.240.188 / 10.206.240.189 are the LB addresses;
# node IPs do not need to be listed, although adding them does no harm.
cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "10.206.176.19",
    "10.206.240.188",
    "10.206.240.189",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#-----------------------
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

#-----------------------
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

[root@master k8s]# bash k8s-cert.sh
2019/04/22 18:05:08 [INFO] generating a new CA key and certificate from CSR
2019/04/22 18:05:08 [INFO] generate received request
2019/04/22 18:05:08 [INFO] received CSR
2019/04/22 18:05:08 [INFO] generating key: rsa-2048
2019/04/22 18:05:09 [INFO] encoded CSR
2019/04/22 18:05:09 [INFO] signed certificate with serial number 631400127737303589248201910249856863284562827982
2019/04/22 18:05:09 [INFO] generate received request
2019/04/22 18:05:09 [INFO] received CSR
2019/04/22 18:05:09 [INFO] generating key: rsa-2048
2019/04/22 18:05:10 [INFO] encoded CSR
2019/04/22 18:05:10 [INFO] signed certificate with serial number 99345466047844052770348056449571016254842578399
2019/04/22 18:05:10 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
2019/04/22 18:05:10 [INFO] generate received request
2019/04/22 18:05:10 [INFO] received CSR
2019/04/22 18:05:10 [INFO] generating key: rsa-2048
2019/04/22 18:05:11 [INFO] encoded CSR
2019/04/22 18:05:11 [INFO] signed certificate with serial number 309283889504556884051139822527420141544215396891
2019/04/22 18:05:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
2019/04/22 18:05:11 [INFO] generate received request
2019/04/22 18:05:11 [INFO] received CSR
2019/04/22 18:05:11 [INFO] generating key: rsa-2048
2019/04/22 18:05:11 [INFO] encoded CSR
2019/04/22 18:05:11 [INFO] signed certificate with serial number 286610519064253595846587034459149175950956557113
2019/04/22 18:05:11 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
[root@master k8s]# ls
admin.csr       apiserver.sh    ca-key.pem             etcd-cert.sh  kube-proxy.csr       kubernetes                            scheduler.sh  server.pem
admin-csr.json  ca-config.json  ca.pem                 etcd.sh       kube-proxy-csr.json  kubernetes-server-linux-amd64.tar.gz  server.csr
admin-key.pem   ca.csr          controller-manager.sh  k8s-cert      kube-proxy-key.pem   kubernetes.tar.gz                     server-csr.json
admin.pem       ca-csr.json     etcd-cert              k8s-cert.sh   kube-proxy.pem       master.zip
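To confirm that the master IP, the LB addresses and the service VIP really ended up in the apiserver certificate, the SAN list of server.pem can be inspected with openssl (a verification sketch added here, not part of the original procedure):

[root@master k8s]# openssl x509 -in server.pem -noout -text | grep -A1 "Subject Alternative Name"    # should list the hosts from server-csr.json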
[root@master k8s]# cp ca-key.pem ca.pem server-key.pem server.pem /opt/kubernetes/ssl/
[root@master k8s]# cat token.csv
0fb61c46f8991b718eb38d27b605b008,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@master k8s]# mv token.csv /opt/kubernetes/cfg/
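The first field of token.csv is simply a random 32-character hex string (the line format is token,user,uid,"group"). A common way to generate one, assuming /dev/urandom is available, is:

[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '    # prints a 32-character hex token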
Start the apiserver
[root@master k8s]# systemctl start kube-apiserver
[root@master k8s]# ps -ef | grep apiserver
root   3264     1 99 20:35 ?      00:00:01 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --log-dir=/opt/kubernetes/logs --v=4 --etcd-servers=https://192.168.10.11:2379,https://192.168.10.12:2379,https://192.168.10.13:2379 --bind-address=192.168.10.11 --secure-port=6443 --advertise-address=192.168.10.11 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root   3274  1397  0 20:35 pts/0  00:00:00 grep --color=auto apiserver
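As an extra sanity check, the apiserver's health endpoint can be queried on the local insecure port (8080 by default in this version, which the ss output below also shows); assuming that default has not been changed:

[root@master k8s]# curl http://127.0.0.1:8080/healthz
ok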
Generate the config file and start the controller-manager
[root@master k8s]# cat controller-manager.sh
#!/bin/bash

MASTER_ADDRESS=$1      # apiserver address

cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=${MASTER_ADDRESS}:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
--experimental-cluster-signing-duration=87600h0m0s"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

[root@master k8s]# bash controller-manager.sh 127.0.0.1    # pass the master (apiserver) IP; 127.0.0.1 because the apiserver runs on this host
[root@master k8s]# ss -lntp
State  Recv-Q Send-Q Local Address:Port    Peer Address:Port
LISTEN 0      128    192.168.10.11:6443    *:*     users:(("kube-apiserver",pid=7604,fd=6))
LISTEN 0      128    192.168.10.11:2379    *:*     users:(("etcd",pid=1428,fd=7))
LISTEN 0      128    127.0.0.1:2379        *:*     users:(("etcd",pid=1428,fd=6))
LISTEN 0      128    127.0.0.1:10252       *:*     users:(("kube-controller",pid=7593,fd=3))
LISTEN 0      128    192.168.10.11:2380    *:*     users:(("etcd",pid=1428,fd=5))
LISTEN 0      128    127.0.0.1:8080        *:*     users:(("kube-apiserver",pid=7604,fd=5))
LISTEN 0      128    *:22                  *:*     users:(("sshd",pid=902,fd=3))
LISTEN 0      100    127.0.0.1:25          *:*     users:(("master",pid=1102,fd=13))
LISTEN 0      128    :::10257              :::*    users:(("kube-controller",pid=7593,fd=5))
LISTEN 0      128    :::22                 :::*    users:(("sshd",pid=902,fd=4))
LISTEN 0      100    ::1:25                :::*    users:(("master",pid=1102,fd=14))
Generate the config file and start the scheduler
[root@master k8s]# cat scheduler.sh
#!/bin/bash

MASTER_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=${MASTER_ADDRESS}:8080 --leader-elect"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

[root@master k8s]# bash scheduler.sh 127.0.0.1
[root@master k8s]# ss -lntp
State  Recv-Q Send-Q Local Address:Port    Peer Address:Port
LISTEN 0      128    192.168.10.11:2379    *:*     users:(("etcd",pid=1428,fd=7))
LISTEN 0      128    127.0.0.1:2379        *:*     users:(("etcd",pid=1428,fd=6))
LISTEN 0      128    127.0.0.1:10252       *:*     users:(("kube-controller",pid=7809,fd=3))
LISTEN 0      128    192.168.10.11:2380    *:*     users:(("etcd",pid=1428,fd=5))
LISTEN 0      128    *:22                  *:*     users:(("sshd",pid=902,fd=3))
LISTEN 0      100    127.0.0.1:25          *:*     users:(("master",pid=1102,fd=13))
LISTEN 0      128    :::10251              :::*    users:(("kube-scheduler",pid=8073,fd=3))
LISTEN 0      128    :::10257              :::*    users:(("kube-controller",pid=7809,fd=5))
LISTEN 0      128    :::22                 :::*    users:(("sshd",pid=902,fd=4))
LISTEN 0      100    ::1:25                :::*    users:(("master",pid=1102,fd=14))
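With both components running, their local health endpoints can be probed the same way (10252 for the controller-manager and 10251 for the scheduler in this version, matching the ports in the ss output above); a quick sketch:

[root@master k8s]# curl http://127.0.0.1:10252/healthz
ok
[root@master k8s]# curl http://127.0.0.1:10251/healthz
ok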
The kube-controller-manager config file
[root@master k8s]# cat /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true
--v=4
--master=127.0.0.1:8080                    # apiserver connection address
--leader-elect=true                        # automatic leader election for high availability
--address=127.0.0.1                        # listen address; not exposed externally
--service-cluster-ip-range=10.0.0.0/24     # must match the range configured on the apiserver
--cluster-name=kubernetes                  # cluster name
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem     # signing certificate
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem  # signing key
--root-ca-file=/opt/kubernetes/ssl/ca.pem                  # root CA certificate
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem
--experimental-cluster-signing-duration=87600h0m0s"        # validity period of the certificates it signs
The kube-scheduler config file
[root@master k8s]# cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect"
Copy the kubectl client tool into /usr/bin
[root@master k8s]# cp kubernetes/server/bin/kubectl /usr/bin/
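With no kubeconfig present, kubectl falls back to the local insecure endpoint 127.0.0.1:8080, which is why it can talk to this apiserver directly on the master. A quick check (sketch):

[root@master k8s]# kubectl cluster-info    # should report the master URL if the apiserver is reachable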
Check the cluster status
[root@master k8s]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
controller-manager   Healthy   ok