Docker & K8s: Deploying K8s Quickly with kubeadm

Posted by 大聪明Smart


Environment Preparation

Three CentOS 8 virtual machines, using the Minimal Install profile; ticking Network Tools, System Tools, and the standard packages is enough.

Minimum requirements: 2 CPUs / 2 GB RAM / 20 GB disk

IP plan: 192.168.12.10 master, 192.168.12.11 node01, 192.168.12.12 node02

Environment Initialization

Use NAT mode for the virtual machine network.

On Windows, configure IPv4 for the VMnet8 adapter:

# IP
192.168.12.1
# Netmask
255.255.255.0
# Preferred DNS
192.168.12.254

VMware VMnet8 settings:

# Subnet IP
192.168.12.0
# Netmask
255.255.255.0
# Gateway
192.168.12.254

CentOS 8: IP configuration, hostname, EPEL repo, and common tools

Run on all three machines:

# Disable selinux
~]# vi /etc/selinux/config
SELINUX=disabled
# Configure a static IP
~]# vim /etc/sysconfig/network-scripts/ifcfg-ens33
ONBOOT=yes  # modify the existing line
BOOTPROTO=static  # modify the existing line
# Append the following lines
IPADDR=192.168.12.10   # .11 and .12 on the two nodes
NETMASK=255.255.255.0
GATEWAY=192.168.12.254
DNS1=192.168.12.254
~]# systemctl restart NetworkManager
~]# reboot
~]# getenforce
Disabled
~]# ping baidu.com
# Set the hostname (one per machine)
~]# hostnamectl set-hostname k8s-master
~]# hostnamectl set-hostname k8s-node01
~]# hostnamectl set-hostname k8s-node02
# Disable the swap partition
~]# swapoff -a  # temporary
~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab  # permanent
# Run on all three machines
~]# cat >> /etc/hosts << EOF
192.168.12.10 k8s-master
192.168.12.11 k8s-node01
192.168.12.12 k8s-node02
EOF
# Reboot
~]# reboot
~]# systemctl stop firewalld
~]# systemctl disable firewalld
~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-8.repo
~]# yum clean all
~]# yum makecache
~]# yum install wget net-tools telnet tree nmap sysstat lrzsz dos2unix bind-utils -y
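The swap-disabling sed one-liner above is easy to get wrong, so it can be rehearsed on a throwaway copy first. A minimal sketch with hypothetical fstab content; only the sed expression matches the guide:

```shell
# Sketch: verify the fstab swap-commenting sed on a temporary copy before
# touching the real /etc/fstab. The sample entries below are made up.
tmpfstab=$(mktemp)
cat > "$tmpfstab" << 'EOF'
/dev/mapper/cl-root /    xfs  defaults 0 0
/dev/mapper/cl-swap none swap swap defaults 0 0
EOF

# Same substitution as in the guide: comment out any line mentioning " swap "
sed -i '/ swap / s/^\(.*\)$/#\1/g' "$tmpfstab"

cat "$tmpfstab"   # the swap line should now start with '#'
rm -f "$tmpfstab"
```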

# Let bridged IPv4/IPv6 traffic pass through iptables
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Apply
sysctl --system
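A quick grep helper can confirm that both bridge settings actually landed in the file before moving on. A sketch; the function name and structure are illustrative, not part of any tool:

```shell
# Sketch: sanity-check that a sysctl conf file sets both bridge keys to 1.
# The file path is a parameter so the helper works on any candidate file;
# on the cluster machines it would be /etc/sysctl.d/k8s.conf.
check_bridge_conf() {
    local conf=$1
    for key in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
        grep -q "^$key = 1" "$conf" || { echo "missing: $key" >&2; return 1; }
    done
    echo "bridge sysctls ok in $conf"
}

# Typical use:  check_bridge_conf /etc/sysctl.d/k8s.conf
```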

Installation

Install Docker

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install docker-ce -y
# /etc/docker/daemon.json
# "bip" must line up with the pod IPs assigned later; a common convention is to
# build its middle two octets from the last two octets of the host IP.
{
	"graph": "/data/docker",
	"storage-driver": "overlay2",
	"insecure-registries": ["registry.access.redhat.com", "quay.io"],
	"registry-mirrors": ["https://q2gr04ke.mirror.aliyuncs.com/"],
	"bip": "172.12.10.1/24",
	"exec-opts": ["native.cgroupdriver=systemd"],
	"live-restore": true
}
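Note that the real daemon.json must be strict JSON: a stray comment or trailing comma will keep dockerd from starting at all. A quick validation sketch, assuming python3 is available on the host:

```shell
# Sketch: validate /etc/docker/daemon.json before (re)starting Docker.
# python3 -m json.tool exits non-zero on any syntax error.
if python3 -m json.tool /etc/docker/daemon.json > /dev/null; then
    echo "daemon.json: valid JSON"
else
    echo "daemon.json: syntax error, fix it before restarting docker" >&2
fi
```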

systemctl start docker
systemctl enable docker


# Uninstall Docker (kept for reference)
yum remove docker-ce.x86_64 docker-ce-cli.x86_64 -y
rm -rf /var/lib/docker

Add the Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet, and kubectl

Run on all three machines; version v1.15.0 is pinned here:

yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
systemctl enable kubelet

Deploy the Kubernetes Master

Run on the master node only; change --apiserver-advertise-address to your own master's address:

[root@k8s-master ~]# kubeadm init \
--apiserver-advertise-address=192.168.12.10 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.15.0 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=172.12.0.0/16



To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.12.10:6443 --token p6hvb3.5sln5g4k32wcrvn2 \
    --discovery-token-ca-cert-hash sha256:4d96240030c015b2e146c5ee2e4db4a40b2ff5bd55040b2768388a052d6c3613

# Run the following, as the output above suggests
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
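The join token and CA cert hash printed above are plain strings; if the init output was saved, they can be pulled out with sed for scripting node joins later. A sketch using the example values from this guide (the parsing itself is illustrative, not part of kubeadm):

```shell
# Sketch: extract the token and CA cert hash from a saved kubeadm join line.
# join_cmd holds the example join command from this guide.
join_cmd='kubeadm join 192.168.12.10:6443 --token p6hvb3.5sln5g4k32wcrvn2 --discovery-token-ca-cert-hash sha256:4d96240030c015b2e146c5ee2e4db4a40b2ff5bd55040b2768388a052d6c3613'

token=$(echo "$join_cmd" | sed -n 's/.*--token \([^ ]*\).*/\1/p')
hash=$(echo "$join_cmd" | sed -n 's/.*--discovery-token-ca-cert-hash \([^ ]*\).*/\1/p')
echo "token: $token"
echo "hash:  $hash"

# If the token has expired, a fresh join command can be generated on the master:
#   kubeadm token create --print-join-command
```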

If image pulls time out, pull the images from a domestic mirror with docker first, then tag them with the names kubeadm expects:

kubeadm config images list   # list the images and tags kubeadm expects


# Pull the corresponding versions from a domestic mirror
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/kube-apiserver:v1.13.1
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/kube-controller-manager:v1.13.1
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/kube-proxy:v1.13.1
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/kube-scheduler:v1.13.1
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/etcd:3.2.24
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/pause:3.1
sudo docker pull registry.cn-beijing.aliyuncs.com/imcto/coredns:1.2.6


# Tag the images; the tags must match what kubeadm config images list reports
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/kube-controller-manager:v1.13.1 k8s.gcr.io/kube-controller-manager:v1.13.1
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/kube-apiserver:v1.13.1 k8s.gcr.io/kube-apiserver:v1.13.1
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/kube-proxy:v1.13.1 k8s.gcr.io/kube-proxy:v1.13.1
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/kube-scheduler:v1.13.1 k8s.gcr.io/kube-scheduler:v1.13.1
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/pause:3.1 k8s.gcr.io/pause:3.1
sudo docker tag registry.cn-beijing.aliyuncs.com/imcto/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
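The seven pull/tag pairs above follow one pattern, so they can be generated from a single list. A sketch that only prints the commands, so the versions can be double-checked against `kubeadm config images list` before anything runs; MIRROR is the mirror repository used above:

```shell
# Sketch: generate the docker pull/tag command pairs from one image list
# instead of repeating them by hand. Versions must match what
# `kubeadm config images list` reports for your cluster.
MIRROR=registry.cn-beijing.aliyuncs.com/imcto
TARGET=k8s.gcr.io

for img in kube-apiserver:v1.13.1 kube-controller-manager:v1.13.1 \
           kube-proxy:v1.13.1 kube-scheduler:v1.13.1 \
           etcd:3.2.24 pause:3.1 coredns:1.2.6; do
    echo "docker pull $MIRROR/$img"
    echo "docker tag  $MIRROR/$img $TARGET/$img"
done
# Once the output looks right, pipe it through a shell:  ... | sh
```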

Join the Worker Nodes

Run on both worker nodes:

[root@k8s-node01 ~]# kubeadm join 192.168.12.10:6443 --token p6hvb3.5sln5g4k32wcrvn2 \
    --discovery-token-ca-cert-hash sha256:4d96240030c015b2e146c5ee2e4db4a40b2ff5bd55040b2768388a052d6c3613

Install the Network Plugin

# Remember to change the CIDR and backend mode in the manifest:
#  net-conf.json: |
#    {
#      "Network": "172.12.0.0/16",
#      "Backend": {
#        "Type": "host-gw"
#      }
#    }
# GitHub is often unreachable, so a mirrored copy of the manifest is used here
kubectl apply -f http://mirrors.liboer.top/kube-flannel.yaml
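Editing the Network value by hand is error-prone; a sed one-liner can patch a downloaded manifest instead. A sketch against a stand-in manifest fragment, assuming the upstream default CIDR 10.244.0.0/16; on a real host the sed would run against the downloaded kube-flannel.yml itself:

```shell
# Sketch: patch the pod CIDR in a flannel manifest so it matches the
# --pod-network-cidr passed to kubeadm init (172.12.0.0/16 in this guide).
# A stand-in fragment is created here for illustration.
manifest=kube-flannel-snippet.yml
cat > "$manifest" << 'EOF'
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
EOF

# Replace the upstream default with the cluster's pod CIDR
sed -i 's#"Network": "10.244.0.0/16"#"Network": "172.12.0.0/16"#' "$manifest"
grep '"Network"' "$manifest"   # confirm before running kubectl apply
rm -f "$manifest"
```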

Installing flannel kept failing; after a long search through various versions, the cause turned out to be the iptable_nat kernel module on this CentOS install.

Fix:

Run on all three machines:

~]# modinfo iptable_nat
filename:       /lib/modules/4.18.0-305.3.1.el8.x86_64/kernel/net/ipv4/netfilter/iptable_nat.ko.xz
license:        GPL
rhelversion:    8.4
srcversion:     98725EFA1CB8A67AC0BE0BD
depends:        ip_tables,nf_nat
intree:         Y
name:           iptable_nat
vermagic:       4.18.0-305.3.1.el8.x86_64 SMP mod_unload modversions
sig_id:         PKCS#7
signer:         CentOS kernel signing key
sig_key:        1B:76:0B:00:B4:46:42:E5:5A:5D:E3:52:84:E5:35:67:94:50:0B:72
sig_hashalgo:   sha256
signature:      9A:02:50:27:3E:CF:F1:48:E8:18:E8:2E:43:6A:54:EF:6D:1C:80:8B: ...
~]# insmod /lib/modules/4.18.0-305.3.1.el8.x86_64/kernel/net/ipv4/netfilter/iptable_nat.ko.xz
insmod: ERROR: could not insert module /lib/modules/4.18.0-305.3.1.el8.x86_64/kernel/net/ipv4/netfilter/iptable_nat.ko.xz: Unknown symbol in module
~]# modprobe iptable_nat
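A reusable "is this module loaded?" check saves re-running modinfo by hand. A small sketch; the modules file is a parameter so the logic is easy to try out, and on a real host it defaults to /proc/modules:

```shell
# Sketch: check whether a kernel module appears in a modules listing.
# /proc/modules lists each loaded module at the start of a line.
module_loaded() {
    mod=$1
    modfile=${2:-/proc/modules}
    grep -q "^${mod} " "$modfile"
}

# Typical use on the cluster machines:
#   module_loaded iptable_nat || modprobe iptable_nat
```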

Check the pod status; everything should be Running. Before the CNI plugin was installed, the nodes were NotReady and the two coredns pods were Pending; once flannel is installed they become Ready and Running.

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-bccdc95cf-cgj2m              1/1     Running   0          19m
coredns-bccdc95cf-shkmr              1/1     Running   0          19m
etcd-k8s-master                      1/1     Running   0          19m
kube-apiserver-k8s-master            1/1     Running   0          18m
kube-controller-manager-k8s-master   1/1     Running   0          18m
kube-flannel-ds-7dmd6                1/1     Running   0          30s
kube-flannel-ds-gdnbw                1/1     Running   0          30s
kube-flannel-ds-x72ts                1/1     Running   0          30s
kube-proxy-kd79h                     1/1     Running   0          19m
kube-proxy-mh2cn                     1/1     Running   0          18m
kube-proxy-z58qt                     1/1     Running   0          18m
kube-scheduler-k8s-master            1/1     Running   0          18m

Check the cluster:

[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   55m   v1.15.0
k8s-node01   Ready    <none>   53m   v1.15.0
k8s-node02   Ready    <none>   53m   v1.15.0

Test the Kubernetes Cluster

Create a pod in the Kubernetes cluster, expose a port, and verify that it is reachable:

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@k8s-master ~]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-554b9c67f9-jbch5   1/1     Running   0          2m26s
# If something goes wrong, inspect the pod details or logs
kubectl describe pod nginx-554b9c67f9-jbch5
kubectl logs nginx-554b9c67f9-jbch5 -n namespace  # -n can be omitted for the default namespace
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pods,svc
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-554b9c67f9-jbch5   1/1     Running   0          14m
NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.1.0.1      <none>        443/TCP        52m
service/nginx        NodePort    10.1.132.56   <none>        80:30824/TCP   9m48s
# http://192.168.12.12:30824
# If the browser cannot reach it, run the following on all three machines;
# newer Docker versions change the iptables FORWARD policy
iptables -P FORWARD ACCEPT
[root@k8s-master ~]# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
nginx-554b9c67f9-jbch5   1/1     Running   0          28m   172.12.1.2   k8s-node02   <none>           <none>
# curl against the in-cluster pod IP also returns the nginx page
curl 172.12.1.2

Access it at http://NodeIP:Port; in this case any of

http://192.168.12.10:30824, http://192.168.12.11:30824, or http://192.168.12.12:30824 will work.


Using kubectl on Worker Nodes

Trying kubectl on a worker node fails at first:

[root@k8s-node01 ~]# kubectl get pod
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Fix:

Copy /etc/kubernetes/admin.conf from the master node to the same directory on the worker node:

[root@k8s-node01 kubernetes]# scp k8s-master:/etc/kubernetes/admin.conf .

Then set the environment variable:

[root@k8s-node01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@k8s-node01 kubernetes]# source ~/.bash_profile
[root@k8s-node01 kubernetes]# kubectl get pod
NAME                     READY   STATUS    RESTARTS   AGE
nginx-554b9c67f9-2w6bf   1/1     Running   0          4d19h

Other Issues

After suspending and resuming the VMs, the two coredns pods sometimes die for no obvious reason.

# Restart docker, or simply force-delete the pods so they are recreated
kubectl delete pod coredns-bccdc95cf-shkmr --grace-period=0 --force -n kube-system

Uninstalling

Hopefully you will never need this.

If things go badly wrong, you can tear everything down and reinstall from the start:

kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum clean all
yum remove kube*
