Quickly Deploying the Latest Kubernetes (v1.21.3) on Three ECS Instances with kubeadm

Posted by 大聪明Smart

Quickly deploying Kubernetes v1.21.3 on three Alibaba Cloud ECS instances with kubeadm

Local virtual machines were just too unreliable for me; they kept dying. In a fit of frustration I rented three ECS instances. It hurt the wallet, but it does save a lot of hassle, since I no longer need to babysit the environment.

Environment preparation

Three Alibaba Cloud CentOS 7.2 instances

Minimum spec: 2 vCPUs / 4 GB RAM / 20 GB disk

Buy the public IPs (elastic NICs) too, since you will want to connect over the public IP with Xshell; managing the machines otherwise is a pain.

Be sure to buy all three ECS instances under the same account and in the same region, so that the machines share one private network. Forming a cluster across different private networks is very hard (I have tried; joining over public IPs rarely succeeds).

Open the usual ports in the security group. Things seemed to work even without opening them, but open the common ones anyway; 80 and 8080 are a must.

Environment initialization

Initializing CentOS 7.2

Run the following on all three machines. Every IP from here on is a private (internal) IP; never use the public IPs.

To find the private IP, run ip addr; it is the IPv4 address on eth0.
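A small helper for that lookup (a sketch; the awk field positions match CentOS 7's "ip addr" output, and parse_inet is a name introduced here, not a system command):

```shell
# Pipe "ip addr show eth0" through this to print just the private IPv4.
parse_inet() { awk '/inet /{sub(/\/.*/,"",$2); print $2; exit}'; }

# On a real machine:
#   ip addr show eth0 | parse_inet
# Demo with a canned line from "ip addr" output:
echo "    inet 172.16.0.10/24 brd 172.16.0.255 scope global eth0" | parse_inet
# → 172.16.0.10
```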

~]# setenforce 0   # set SELinux to permissive for this boot
~]# getenforce     # expect Permissive, or Disabled if the image ships with SELinux off
Disabled

# Set the hostname (one command per machine)
~]# hostnamectl set-hostname k8s-master    # on the master
~]# hostnamectl set-hostname k8s-node01    # on node 1
~]# hostnamectl set-hostname k8s-node02    # on node 2
# Turn off swap (the kubelet refuses to start with swap enabled)
~]# swapoff -a # temporary
~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab # permanent
# Run on all three machines: map hostnames to private IPs
~]# vi /etc/hosts 
<master private IP> k8s-master
<node01 private IP> k8s-node01
<node02 private IP> k8s-node02
~]# systemctl stop firewalld && systemctl disable firewalld
~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
~]# yum clean all
~]# yum makecache
~]# yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y
# Let iptables see bridged IPv4/IPv6 traffic
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
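To verify the settings took effect (the key only exists once the br_netfilter kernel module is loaded, hence the fallback):

```shell
# Read back the bridge setting; prints "unavailable" when the br_netfilter
# module is not loaded (load it with: modprobe br_netfilter)
VAL=$(sysctl -n net.bridge.bridge-nf-call-iptables 2>/dev/null || echo unavailable)
echo "net.bridge.bridge-nf-call-iptables = $VAL"
```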

Installation

Installing Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install docker-ce -y
# Write /etc/docker/daemon.json (strict JSON: no comments allowed inside the file)
{
	"graph": "/data/docker",
	"storage-driver": "overlay2",
	"insecure-registries": ["registry.access.redhat.com", "quay.io"],
	"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com/"],
	"bip": "10.244.10.1/24",
	"exec-opts": ["native.cgroupdriver=systemd"],
	"live-restore": true
}
# "bip" must line up with the pod IPs assigned later; a good rule of thumb is to
# use the last two octets of the host's private IP as its middle two segments
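Because daemon.json must be strict JSON, a stray comment or trailing comma will stop the Docker daemon from starting. A quick sanity check before restarting docker (a sketch; it uses python3 if present, and writes to /tmp only for the demo):

```shell
# Write the config to a temp file and verify it parses as JSON.
cat > /tmp/daemon.json <<'EOF'
{
	"graph": "/data/docker",
	"storage-driver": "overlay2",
	"insecure-registries": ["registry.access.redhat.com", "quay.io"],
	"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com/"],
	"bip": "10.244.10.1/24",
	"exec-opts": ["native.cgroupdriver=systemd"],
	"live-restore": true
}
EOF
# On stock CentOS 7 substitute "python -m json.tool" (python2) for python3.
if command -v python3 >/dev/null; then
  python3 -m json.tool /tmp/daemon.json >/dev/null && CHECK=ok || CHECK=bad
else
  CHECK=skipped
fi
echo "daemon.json: $CHECK"
```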

systemctl start docker
systemctl enable docker


# Uninstall docker (kept here for reference)
yum remove docker-ce.x86_64 docker-ce-cli.x86_64 -y
rm -rf /var/lib/docker

Adding the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Installing kubeadm, kubelet, and kubectl

Run on all three machines. No version is pinned here, so yum installs the latest release (v1.21.3 at the time of writing).

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

Deploying the Kubernetes master

Pull each image from a domestic (Aliyun) mirror with docker first, then retag it, because the default k8s.gcr.io registry is simply unreachable from here.

kubeadm config images list   # list the image names and tags kubeadm expects
[root@k8s-master ~]# kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

# Pull the matching versions from the domestic mirror; tags must match the kubeadm config output
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0

# Retag them; the names must match the kubeadm config output
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.21.3 k8s.gcr.io/kube-apiserver:v1.21.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.21.3 k8s.gcr.io/kube-controller-manager:v1.21.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.21.3 k8s.gcr.io/kube-scheduler:v1.21.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.3 k8s.gcr.io/kube-proxy:v1.21.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0
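The pull/tag pairs above can also be generated with a loop instead of typed one by one. A sketch: it only prints the commands, so you can review them and then pipe the output through bash to execute.

```shell
MIRROR=registry.cn-hangzhou.aliyuncs.com/google_containers
CMDS=""
for img in kube-apiserver:v1.21.3 kube-controller-manager:v1.21.3 \
           kube-scheduler:v1.21.3 kube-proxy:v1.21.3 pause:3.4.1 etcd:3.4.13-0; do
  CMDS="$CMDS
docker pull $MIRROR/$img
docker tag $MIRROR/$img k8s.gcr.io/$img"
done
# coredns is the odd one out: the mirror tag (1.8.0) differs from the
# upstream name (coredns/coredns:v1.8.0), so handle it separately
CMDS="$CMDS
docker pull $MIRROR/coredns:1.8.0
docker tag $MIRROR/coredns:1.8.0 k8s.gcr.io/coredns/coredns:v1.8.0"
echo "$CMDS"        # to execute: echo "$CMDS" | bash
```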

Run on the master node. Replace the apiserver advertise address with your master's private IP.

kubeadm init \
--apiserver-advertise-address=<master private IP> \
--service-cidr=10.10.0.0/16 \
--pod-network-cidr=10.244.0.0/16

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.12.10:6443 --token p6hvb3.5sln5g4k32wcrvn2 \
    --discovery-token-ca-cert-hash sha256:4d96240030c015b2e146c5ee2e4db4a40b2ff5bd55040b2768388a052d6c3613 


# Then run the commands exactly as the output above instructs
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
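As an aside worth knowing (not what this post actually ran): kubeadm init accepts a standard --image-repository flag that pulls the control-plane images straight from a mirror, making the manual pull/tag step unnecessary. A sketch with the same CIDRs as above:

```shell
# Alternative init command; --image-repository is a standard kubeadm flag.
MASTER_IP=192.168.12.10   # this example's master private IP; replace with yours
INIT_CMD="kubeadm init \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--apiserver-advertise-address=$MASTER_IP \
--service-cidr=10.10.0.0/16 \
--pod-network-cidr=10.244.0.0/16"
echo "$INIT_CMD"   # review, then run it on the master
```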

Installing a network plugin

The flannel image also pulls slowly, but at least it can be pulled. If it is too slow for you, set up your own mirror acceleration; that is not covered here.

Install the network plugin:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Until the CNI plugin is installed, the nodes show NotReady and the two coredns pods sit in Pending; once flannel is running they flip to Ready and Running.

Joining nodes to the cluster

Run on both worker nodes:

[root@k8s-node01 ~]# kubeadm join 192.168.12.10:6443 --token p6hvb3.5sln5g4k32wcrvn2 \
    --discovery-token-ca-cert-hash sha256:4d96240030c015b2e146c5ee2e4db4a40b2ff5bd55040b2768388a052d6c3613 
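Note that the token printed by kubeadm init expires after 24 hours. If you join a node later, generate a fresh join command on the master with the standard kubeadm token create subcommand:

```shell
# Run on the master: prints a complete, ready-to-run "kubeadm join ..." line
# with a fresh token. The guard only keeps this runnable on non-cluster hosts.
if command -v kubeadm >/dev/null; then
  JOIN_CMD=$(kubeadm token create --print-join-command)
else
  JOIN_CMD="(kubeadm not installed on this host)"
fi
echo "$JOIN_CMD"
```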

After joining, the two worker nodes also need the images pulled locally, or their pods will never start (the Google registry, you know). Only kube-proxy and pause are needed:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.3
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1


docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.21.3 k8s.gcr.io/kube-proxy:v1.21.3
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.4.1 k8s.gcr.io/pause:3.4.1

Checking the result

[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   48m     v1.21.3
k8s-node01   Ready    <none>                 4m17s   v1.21.3
k8s-node02   Ready    <none>                 46m     v1.21.3

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-558bd4d5db-blx27             1/1     Running   0          62m
coredns-558bd4d5db-md5dq             1/1     Running   0          62m
etcd-k8s-master                      1/1     Running   0          62m
kube-apiserver-k8s-master            1/1     Running   0          62m
kube-controller-manager-k8s-master   1/1     Running   0          62m
kube-flannel-ds-69djm                1/1     Running   0          18m
kube-flannel-ds-7nv8s                1/1     Running   0          42m
kube-flannel-ds-nttn4                1/1     Running   0          42m
kube-proxy-mkdwg                     1/1     Running   0          18m
kube-proxy-pxfvw                     1/1     Running   0          62m
kube-proxy-x49br                     1/1     Running   0          60m
kube-scheduler-k8s-master            1/1     Running   0          62m

Testing the Kubernetes cluster

Create a pod in the cluster, expose a port, and verify it is reachable:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created

[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-554b9c67f9-jbch5   1/1     Running   0          2m26s
# If something goes wrong, inspect the details or the logs
kubectl describe pod nginx-554b9c67f9-jbch5           # details
kubectl logs nginx-554b9c67f9-jbch5 -n <namespace>    # -n can be omitted for the default namespace

[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed

[root@k8s-master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-554b9c67f9-jbch5   1/1     Running   0          14m

NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP        52m
service/nginx        NodePort    10.1.132.56   <none>        80:30824/TCP   9m48s

http://192.168.12.12:30824
# If the browser cannot reach it, run the line below on all three machines;
# newer Docker versions change the iptables FORWARD chain policy to DROP
iptables -P FORWARD ACCEPT

[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-jbch5   1/1     Running   0          28m   172.12.1.2   k8s-node02   <none>           <none>
# curl-ing the pod IP from inside the cluster also returns the nginx page
curl 172.12.1.2

Access URL: http://NodeIP:Port. In this example any of the three works:

http://192.168.12.10:30824 http://192.168.12.11:30824 http://192.168.12.12:30824
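A quick smoke test (a sketch): curl every node's NodePort and report which respond. 30824 is the port assigned in this example; yours will differ.

```shell
PORT=30824
RESULTS=""
for ip in 192.168.12.10 192.168.12.11 192.168.12.12; do
  # -m 3 caps each attempt at 3 seconds so an unreachable node doesn't hang
  if curl -s -m 3 -o /dev/null "http://$ip:$PORT" 2>/dev/null; then
    RESULTS="$RESULTS $ip:ok"
  else
    RESULTS="$RESULTS $ip:unreachable"
  fi
done
echo "$RESULTS"
```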

Using kubectl on the worker nodes

On a worker node, kubectl fails at first:

[root@k8s-node01 ~]# kubectl get pod
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix:

Copy /etc/kubernetes/admin.conf from the master to the same path on the worker:

[root@k8s-node01 kubernetes]# scp k8s-master:/etc/kubernetes/admin.conf .

Then set the KUBECONFIG environment variable:

[root@k8s-node01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-node01 kubernetes]# source ~/.bash_profile
[root@k8s-node01 kubernetes]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-554b9c67f9-2w6bf   1/1     Running   0          4d19h
