k8s Series - 16 - Worker Node Installation
Posted by the WeChat public account 运维家
As everyone knows, a k8s cluster is divided into master and worker nodes. In the previous article we deployed the master; in this one we deploy the worker nodes, on which we need kubelet, kube-proxy, a container runtime, CNI and nginx-proxy. That looks like even more services than the master runs, but don't worry, we will work through them step by step.
Configure the container runtime
PS: this step has to be executed on both worker nodes; my two worker nodes are node2 and node3.
1. Download the package
# Set the version number
[root@node2 ~]# VERSION=1.4.3
# Download the release tarball
[root@node2 ~]# wget https://github.com/containerd/containerd/releases/download/v$VERSION/cri-containerd-cni-$VERSION-linux-amd64.tar.gz
Note: if you cannot download the package, go to the WeChat public account "运维家" and reply "cri-containerd" to get the installation package.
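Optionally, you can verify the tarball before unpacking it. This is a minimal sketch and assumes the release page also publishes the matching .sha256sum file, as containerd releases normally do:
# Fetch the checksum file and verify the download against it
[root@node2 ~]# wget https://github.com/containerd/containerd/releases/download/v$VERSION/cri-containerd-cni-$VERSION-linux-amd64.tar.gz.sha256sum
[root@node2 ~]# sha256sum -c cri-containerd-cni-$VERSION-linux-amd64.tar.gz.sha256sum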
2. Unpack the package
# Unpack
[root@node2 ~]# tar -xvf cri-containerd-cni-$VERSION-linux-amd64.tar.gz
# Copy the files into place
[root@node2 ~]# cp etc/crictl.yaml /etc/
[root@node2 ~]# cp etc/systemd/system/containerd.service /etc/systemd/system/
[root@node2 ~]# cp -r usr /
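At this point the binaries are under /usr/local/bin. A quick sanity check that the copy worked and the version matches what we downloaded:
# Should report v1.4.3
[root@node2 ~]# /usr/local/bin/containerd --version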
3. containerd configuration file
# Create the config directory
[root@node2 ~]# mkdir -p /etc/containerd
# Generate a default config file
[root@node2 ~]# containerd config default > /etc/containerd/config.toml
# Optional tuning: for example, if you have a large disk mounted somewhere, you can point the storage directory at it here
[root@node2 ~]# vim /etc/containerd/config.toml
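For reference, the two top-level keys that decide where containerd keeps its data are root (persistent data: images, snapshots) and state (runtime state, cleared on reboot). A quick look at the defaults before you change anything:
[root@node2 ~]# grep -E '^(root|state) =' /etc/containerd/config.toml
root = "/var/lib/containerd"
state = "/run/containerd"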
4. Start the containerd service
[root@node2 ~]# systemctl enable containerd
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /etc/systemd/system/containerd.service.
[root@node2 ~]# systemctl restart containerd
[root@node2 ~]# systemctl status containerd
● containerd.service - containerd container runtime
   Loaded: loaded (/etc/systemd/system/containerd.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2022-03-19 22:27:56 CST; 23s ago
     Docs: https://containerd.io
  Process: 8034 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 8038 (containerd)
    Tasks: 8
   Memory: 19.4M
   CGroup: /system.slice/containerd.service
           └─8038 /usr/local/bin/containerd

Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.154498095+08:00" level=info msg="Start subscribing containerd event"
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.155149545+08:00" level=info msg="Start recovering state"
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.155233071+08:00" level=info msg="Start event monitor"
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.155269850+08:00" level=info msg="Start snapshots syncer"
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.155279889+08:00" level=info msg="Start cni network conf syncer"
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.155284057+08:00" level=info msg="Start streaming server"
Mar 19 22:27:56 node2 systemd[1]: Started containerd container runtime.
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.164126975+08:00" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.164164104+08:00" level=info msg=serving... address=/run/containerd/containerd.sock
Mar 19 22:27:56 node2 containerd[8038]: time="2022-03-19T22:27:56.164200622+08:00" level=info msg="containerd successfully booted in 0.090964s"
[root@node2 ~]#
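The /etc/crictl.yaml we copied in step 2 points crictl at containerd's socket, so we can also talk to the runtime directly as a final check:
# Both the client version and the runtime (containerd) version should be reported
[root@node2 ~]# crictl version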
Configure kubelet
PS: this step has to be executed on both worker nodes
1. Prepare the kubelet configuration
# Create the directory for the certificates
[root@node2 ~]# mkdir -p /etc/kubernetes/ssl/
# Declare this node's hostname
[root@node2 ~]# HOSTNAME=node2
# Copy the certificates into place
# If you built your cluster exactly as in this series, the next command may error on node2,
# because those certificates were already moved there while setting up the master node; just ignore the error
[root@node2 ~]# mv $HOSTNAME-key.pem $HOSTNAME.pem ca.pem ca-key.pem /etc/kubernetes/ssl/
# Move the kubeconfig as well
[root@node2 ~]# mv $HOSTNAME.kubeconfig /etc/kubernetes/kubeconfig
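# Quick check that the certificates and kubeconfig landed where the kubelet config below expects them
[root@node2 ~]# ls /etc/kubernetes/ssl/
[root@node2 ~]# ls /etc/kubernetes/kubeconfig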
# Declare this node's IP address
[root@node2 ~]# IP=192.168.112.131
# Write the config file
[root@node2 ~]# cat <<EOF > /etc/kubernetes/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/etc/kubernetes/ssl/ca.pem"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "169.254.25.10"
podCIDR: "10.200.0.0/16"
address: $IP
readOnlyPort: 0
staticPodPath: /etc/kubernetes/manifests
healthzPort: 10248
healthzBindAddress: 127.0.0.1
kubeletCgroups: /systemd/system.slice
resolvConf: "/etc/resolv.conf"
runtimeRequestTimeout: "15m"
kubeReserved:
  cpu: 200m
  memory: 512M
tlsCertFile: "/etc/kubernetes/ssl/$HOSTNAME.pem"
tlsPrivateKeyFile: "/etc/kubernetes/ssl/$HOSTNAME-key.pem"
EOF
[root@node2 ~]#
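Because the heredoc delimiter above is unquoted, the shell expands $IP and $HOSTNAME while writing the file. A quick check that the substitution happened (output shown for node2):
[root@node2 ~]# grep -E '^(address|tlsCertFile):' /etc/kubernetes/kubelet-config.yaml
address: 192.168.112.131
tlsCertFile: "/etc/kubernetes/ssl/node2.pem"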
2. Configure the kubelet service
[root@node2 ~]# cat <<EOF > /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
[Service]
ExecStart=/usr/local/bin/kubelet \\
  --config=/etc/kubernetes/kubelet-config.yaml \\
  --container-runtime=remote \\
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \\
  --image-pull-progress-deadline=2m \\
  --kubeconfig=/etc/kubernetes/kubeconfig \\
  --network-plugin=cni \\
  --node-ip=$IP \\
  --register-node=true \\
  --v=2
Restart=on-failure
RestartSec=5
[Install]
WantedBy=multi-user.target
EOF
[root@node2 ~]#
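We only write the unit file here and do not start kubelet yet. A quick check that $IP was expanded into the unit when the heredoc was written:
[root@node2 ~]# grep 'node-ip' /etc/systemd/system/kubelet.service
  --node-ip=192.168.112.131 \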
Configure nginx-proxy
What is this service for? As the name suggests, it is a proxy, and what it proxies is the worker node's access to the apiserver: a small high-availability scheme that lets every local component reach the apiservers through one load-balanced endpoint.
nginx-proxy listens on port 6443, the same port the apiserver itself uses, so it can only be deployed on nodes that do not host an apiserver. Which nodes are those in our cluster? Easy: the worker-only nodes.
In our cluster that means nginx-proxy only needs to be deployed on node3, because it is the only pure worker node (node2 doubles as a master).
1. nginx configuration file
# Create the nginx config directory
[root@node3 ~]# mkdir -p /etc/nginx
# Specify the master IP addresses
[root@node3 ~]# MASTER_IPS=(192.168.112.130 192.168.112.131)
# Generate the nginx config file. Note:
# if your cluster does not have exactly two master nodes like mine, adjust the stream block below:
# write one server line per master node and keep everything else the same.
[root@node3 ~]# cat <<EOF > /etc/nginx/nginx.conf
error_log stderr notice;

worker_processes 2;
worker_rlimit_nofile 130048;
worker_shutdown_timeout 10s;

events {
  multi_accept on;
  use epoll;
  worker_connections 16384;
}

stream {
  upstream kube_apiserver {
    least_conn;
    server ${MASTER_IPS[0]}:6443;
    server ${MASTER_IPS[1]}:6443;
  }

  server {
    listen 127.0.0.1:6443;
    proxy_pass kube_apiserver;
    proxy_timeout 10m;
    proxy_connect_timeout 1s;
  }
}

http {
  aio threads;
  aio_write on;
  tcp_nopush on;
  tcp_nodelay on;

  keepalive_timeout 5m;
  keepalive_requests 100;
  reset_timedout_connection on;
  server_tokens off;
  autoindex off;

  server {
    listen 8081;
    location /healthz {
      access_log off;
      return 200;
    }
    location /stub_status {
      stub_status on;
      access_log off;
    }
  }
}
EOF
[root@node3 ~]#
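The heredoc is again unquoted, so ${MASTER_IPS[0]} and ${MASTER_IPS[1]} are expanded when the file is written. A quick check that the apiserver endpoints made it into the config:
[root@node3 ~]# grep 6443 /etc/nginx/nginx.conf
    server 192.168.112.130:6443;
    server 192.168.112.131:6443;
    listen 127.0.0.1:6443;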
2. Generate the nginx-proxy static pod manifest
[root@node3 ~]# mkdir -p /etc/kubernetes/manifests/
[root@node3 ~]# cat <<EOF > /etc/kubernetes/manifests/nginx-proxy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-proxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-nginx
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    kubernetes.io/os: linux
  priorityClassName: system-node-critical
  containers:
  - name: nginx-proxy
    image: docker.io/library/nginx:1.19
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 25m
        memory: 32M
    securityContext:
      privileged: true
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8081
    volumeMounts:
    - mountPath: /etc/nginx
      name: etc-nginx
      readOnly: true
  volumes:
  - name: etc-nginx
    hostPath:
      path: /etc/nginx
EOF
[root@node3 ~]#
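Once kubelet is running on node3, it will find this manifest in its staticPodPath (/etc/kubernetes/manifests, as set in kubelet-config.yaml) and launch the pod by itself; nothing needs to be applied manually. After that has happened, a quick local check (a sketch, to be run only once the pod is up):
# nginx should answer 200 on the health port and listen on the local apiserver port
[root@node3 ~]# curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8081/healthz
200
[root@node3 ~]# ss -lnt | grep -E '127.0.0.1:6443|:8081'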
Configure kube-proxy
For the remaining content, go to the WeChat public account "运维家" and reply "123" to view it.