Deploying Kubernetes v1.25.3 with kubeadm + Calico
1. Environment preparation
| Hostname | IP address | OS version |
| --- | --- | --- |
| k8s-master01, k8s-master01.wang.org, kubeapi.wang.org, kubeapi | 10.0.0.101 | Ubuntu 20.04 |
| k8s-master02, k8s-master02.wang.org | 10.0.0.102 | Ubuntu 20.04 |
| k8s-master03, k8s-master03.wang.org | 10.0.0.103 | Ubuntu 20.04 |
| k8s-node01, k8s-node01.wang.org | 10.0.0.104 | Ubuntu 20.04 |
| k8s-node02, k8s-node02.wang.org | 10.0.0.105 | Ubuntu 20.04 |
| k8s-node03, k8s-node03.wang.org | 10.0.0.106 | Ubuntu 20.04 |
1-1. Set the hostname
#Run on all nodes:
[root@ubuntu2004 ~]#hostnamectl set-hostname k8s-master01
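The same command is repeated on each of the other nodes with its own name from the table above, for example (a sketch; the prompt still shows the pre-rename hostname):
[root@ubuntu2004 ~]#hostnamectl set-hostname k8s-master02   #on 10.0.0.102
[root@ubuntu2004 ~]#hostnamectl set-hostname k8s-node01     #on 10.0.0.104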
1-2. Disable the firewall
#Run on all nodes:
[root@k8s-master01 ~]# ufw disable
[root@k8s-master01 ~]# ufw status
1-3. Time synchronization
#Run on all nodes:
[root@k8s-master01 ~]# apt install -y chrony
[root@k8s-master01 ~]# systemctl restart chrony
[root@k8s-master01 ~]# systemctl status chrony
[root@k8s-master01 ~]# chronyc sources
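Optionally, confirm that the clock has actually synchronized before continuing; this uses only the standard chrony tooling installed above:
[root@k8s-master01 ~]# chronyc tracking
#"Leap status : Normal" and a small "System time" offset indicate the node is in sync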
1-4. Hostname resolution
#Run on all nodes:
[root@k8s-master01 ~]#vim /etc/hosts
10.0.0.101 k8s-master01 k8s-master01.wang.org kubeapi.wang.org kubeapi
10.0.0.102 k8s-master02 k8s-master02.wang.org
10.0.0.103 k8s-master03 k8s-master03.wang.org
10.0.0.104 k8s-node01 k8s-node01.wang.org
10.0.0.105 k8s-node02 k8s-node02.wang.org
10.0.0.106 k8s-node03 k8s-node03.wang.org
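Since every node needs identical entries, the file can also be copied to the remaining hosts rather than edited six times; a minimal sketch, assuming root SSH access from k8s-master01 to the other nodes:
[root@k8s-master01 ~]# for ip in 10.0.0.102 10.0.0.103 10.0.0.104 10.0.0.105 10.0.0.106; do scp /etc/hosts root@$ip:/etc/hosts; done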
1-5. Disable swap
#Run on all nodes:
[root@k8s-master01 ~]# sed -r -i '/\/swap/s@^@#@' /etc/fstab
[root@k8s-master01 ~]# swapoff -a
[root@k8s-master01 ~]# systemctl --type swap
#If the swap device is not disabled, you must later edit the kubelet configuration file /etc/default/kubelet so that kubelet ignores the swap-enabled state error, with the content: KUBELET_EXTRA_ARGS="--fail-swap-on=false"
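A quick way to confirm that no swap device remains active (standard util-linux tools, no extra setup assumed):
[root@k8s-master01 ~]# swapon --show    #no output means no active swap
[root@k8s-master01 ~]# free -h | grep -i swap    #the Swap line should read 0B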
2. Install Docker
#Run on all nodes:
#Install some required system tools
[root@k8s-master01 ~]# apt update
[root@k8s-master01 ~]# apt -y install apt-transport-https ca-certificates curl software-properties-common
#Install the GPG key
[root@k8s-master01 ~]# curl -fsSL http://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | apt-key add -
OK
#Add the Docker apt repository
[root@k8s-master01 ~]# add-apt-repository "deb [arch=amd64] http://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
#Update the package index and install Docker CE
[root@k8s-master01 ~]# apt update
[root@k8s-master01 ~]# apt install -y docker-ce
#Run on all nodes:
kubelet requires the Docker container engine to use systemd as its cgroup driver (the default is cgroupfs), so we also need to edit Docker's configuration file /etc/docker/daemon.json and add the content below; the registry-mirrors entry specifies the image mirror services to use.
[root@k8s-master01 ~]# vim /etc/docker/daemon.json
"registry-mirrors": [
"https://docker.mirrors.ustc.edu.cn",
"https://hub-mirror.c.163.com",
"https://reg-mirror.qiniu.com",
"https://registry.docker-cn.com"
],
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts":
"max-size": "200m"
,
"storage-driver": "overlay2"
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl start docker
[root@k8s-master01 ~]# systemctl enable docker
[root@k8s-master01 ~]# docker version
Client: Docker Engine - Community
Version: 20.10.21
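Before moving on, it is worth confirming that Docker actually picked up the systemd cgroup driver from daemon.json; a quick check:
[root@k8s-master01 ~]# docker info | grep -i 'cgroup driver'
#expected output: Cgroup Driver: systemd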
#Note: during a kubeadm deployment, Kubernetes images are pulled by default from Google's registry service k8s.gcr.io. Since 2022 the repository has moved to registry.k8s.io, which is directly reachable from mainland China, so an image mirror or VPN is no longer required to pull the images. If you still prefer a domestic mirror, see https://blog.51cto.com/dayu/5811307
3. Install cri-dockerd
#Run on all nodes:
#Download from: https://github.com/Mirantis/cri-dockerd
[root@k8s-master01 ~]# apt install ./cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb -y
#After installation completes, the corresponding cri-docker.service unit starts automatically
[root@k8s-master01 ~]#systemctl status cri-docker.service
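The installed cri-dockerd version can also be checked directly; since the deb package above is 0.2.6.3, the reported version should match:
[root@k8s-master01 ~]# cri-dockerd --version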
4. Install kubeadm, kubelet, and kubectl
#Run on all nodes:
#Configure the package repository for kubelet, kubeadm, and related packages on each host; see the Aliyun mirror documentation for reference
[root@k8s-master01 ~]# apt update
[root@k8s-master01 ~]# apt install -y apt-transport-https curl
[root@k8s-master01 ~]# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
[root@k8s-master01 ~]#cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
#Update the package index and install
[root@k8s-master01 ~]# apt update
[root@k8s-master01 ~]# apt install -y kubelet kubeadm kubectl
#Note: do not start kubelet yet; only enable it to start on boot
[root@k8s-master01 ~]# systemctl enable kubelet
#Confirm the version of kubeadm and the related binaries
[root@k8s-master01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:55:36Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
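Because the repository above is not pinned to a specific release, it is common practice to hold the three packages at the installed version so a routine apt upgrade cannot move them out of step with the cluster; an optional step:
[root@k8s-master01 ~]# apt-mark hold kubelet kubeadm kubectl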
5. Integrate kubelet with cri-dockerd
5-1. Configure cri-dockerd
#Run on all nodes:
[root@k8s-master01 ~]# vim /usr/lib/systemd/system/cri-docker.service
#ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.8 --container-runtime-endpoint fd:// --network-plugin=cni --cni-bin-dir=/opt/cni/bin --cni-cache-dir=/var/lib/cni/cache --cni-conf-dir=/etc/cni/net.d
#Notes:
Parameters to add (their values must match the actual paths where the CNI plugins are deployed on the system):
--network-plugin: the type of network plugin specification; CNI is used here;
--cni-bin-dir: the search directory for CNI plugin binaries;
--cni-cache-dir: the cache directory used by the CNI plugins;
--cni-conf-dir: the directory from which CNI plugin configuration files are loaded.
After the changes, reload systemd and restart the cri-docker.service unit.
[root@k8s-master01 ~]# systemctl daemon-reload && systemctl restart cri-docker.service
[root@k8s-master01 ~]# systemctl status cri-docker
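After the restart, the socket that kubelet will be pointed at should exist; a quick check:
[root@k8s-master01 ~]# ls -l /run/cri-dockerd.sock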
5-2. Configure kubelet
#Run on all nodes:
#Configure kubelet to use the Unix socket that cri-dockerd opens locally; by default the path is /run/cri-dockerd.sock
[root@k8s-master01 ~]# mkdir /etc/sysconfig
[root@k8s-master01 ~]# vim /etc/sysconfig/kubelet
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/cri-dockerd.sock"
[root@k8s-master01 ~]# cat /etc/sysconfig/kubelet
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/cri-dockerd.sock"
#Note: this step can be skipped; instead, pass the "--cri-socket unix:///run/cri-dockerd.sock" option to each subsequent kubeadm command
6. Initialize the first control-plane node
#Run on the first control-plane node:
#List the images Kubernetes needs
[root@k8s-master01 ~]# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/pause:3.8
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
#Pull the required images from the Aliyun mirror
[root@k8s-master01 ~]# kubeadm config images pull --image-repository=registry.aliyuncs.com/google_containers --cri-socket unix:///run/cri-dockerd.sock
[root@k8s-master01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-apiserver v1.25.3 0346dbd74bcb 3 weeks ago 128MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.25.3 6d23ec0e8b87 3 weeks ago 50.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.25.3 603999231275 3 weeks ago 117MB
registry.aliyuncs.com/google_containers/kube-proxy v1.25.3 beaaf00edd38 3 weeks ago 61.7MB
registry.aliyuncs.com/google_containers/pause 3.8 4873874c08ef 4 months ago 711kB
registry.aliyuncs.com/google_containers/etcd 3.5.4-0 a8a176a5d5d6 5 months ago 300MB
registry.aliyuncs.com/google_containers/coredns v1.9.3 5185b96f0bec 5 months ago 48.8MB
[root@k8s-master01 ~]# kubeadm init --control-plane-endpoint="kubeapi.wang.org" --kubernetes-version=v1.25.3 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --token-ttl=0 --cri-socket unix:///run/cri-dockerd.sock --upload-certs --image-repository registry.aliyuncs.com/google_containers
#Output like the following means initialization is complete; record it for later use:
.....
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
kubeadm join kubeapi.wang.org:6443 --token ef0zsq.srn20wj0qqmbf0zf \
--discovery-token-ca-cert-hash sha256:5c62350ec29e14de0b621ec4b485fe66f2bce33e9a75e48c662e497f26ef3c3a \
--control-plane --certificate-key 3edf25d1328f195344b99dac624533a4b46c9de0d1ef3194c491d8733c7f0a1d
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join kubeapi.wang.org:6443 --token ef0zsq.srn20wj0qqmbf0zf \
--discovery-token-ca-cert-hash sha256:5c62350ec29e14de0b621ec4b485fe66f2bce33e9a75e48c662e497f26ef3c3a
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
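As an aside, the long kubeadm init command line above can also be expressed as a configuration file, which is easier to keep under version control; below is a minimal sketch that mirrors the flags used here (the file name kubeadm-config.yaml is arbitrary, and the --token-ttl=0 setting is omitted for brevity):
[root@k8s-master01 ~]# cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///run/cri-dockerd.sock
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.25.3
controlPlaneEndpoint: kubeapi.wang.org:6443
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
[root@k8s-master01 ~]# kubeadm init --config kubeadm-config.yaml --upload-certs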
7. Join the remaining nodes to the cluster
#Run on k8s-master02 and k8s-master03 to join them as additional control-plane nodes:
[root@k8s-master03 ~]#kubeadm join kubeapi.wang.org:6443 --token ef0zsq.srn20wj0qqmbf0zf --discovery-token-ca-cert-hash sha256:5c62350ec29e14de0b621ec4b485fe66f2bce33e9a75e48c662e497f26ef3c3a --control-plane --certificate-key 3edf25d1328f195344b99dac624533a4b46c9de0d1ef3194c491d8733c7f0a1d --cri-socket unix:///run/cri-dockerd.sock
#Run on k8s-node01, k8s-node02, and k8s-node03 to join them as worker nodes:
[root@k8s-node01 ~]#kubeadm join kubeapi.wang.org:6443 --token ef0zsq.srn20wj0qqmbf0zf --discovery-token-ca-cert-hash sha256:5c62350ec29e14de0b621ec4b485fe66f2bce33e9a75e48c662e497f26ef3c3a --cri-socket unix:///run/cri-dockerd.sock
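If the original token has expired or the join command was lost, a fresh one can be printed on the first control-plane node and then run on the new node (remember to append the --cri-socket option as above):
[root@k8s-master01 ~]# kubeadm token create --print-join-command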
8. Deploy Calico
#Run on the first control-plane node:
[root@k8s-master01 ~]#apt install zip unzip -y
[root@k8s-master01 ~]#unzip calico-3.24.4.zip
[root@k8s-master01 ~]#cd calico-3.24.4/manifests/
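Before applying the manifest, it may be worth checking that Calico's default IP pool matches the --pod-network-cidr passed to kubeadm init (10.244.0.0/16 here); in the stock calico.yaml the CALICO_IPV4POOL_CIDR variable is commented out, and if the values disagree it can be uncommented and set to 10.244.0.0/16 before applying:
[root@k8s-master01 ~]#grep -n -A1 CALICO_IPV4POOL_CIDR calico.yaml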
[root@k8s-master01 ~]#kubectl apply -f calico.yaml
[root@k8s-master01 ~]#kubectl get nodes
[root@k8s-master01 ~]#kubectl get pod -A
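The nodes usually report Ready only once the calico-node pod is running on every host; a couple of optional checks while waiting:
[root@k8s-master01 ~]#kubectl -n kube-system get pods -l k8s-app=calico-node -o wide
[root@k8s-master01 ~]#kubectl wait --for=condition=Ready node --all --timeout=300s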
At this point, the kubeadm-based Kubernetes cluster deployment is complete!