Linux Learning - Kubernetes: Installing and Deploying Kubernetes

Posted 丢爸

Installing and deploying Kubernetes with kubeadm on CentOS 7.5

Three virtual machines are used for the deployment (one master, two nodes).
  1. Virtual machine IPs
  • master: 192.168.88.101
  • node1: 192.168.88.102
  • node2: 192.168.88.103
  2. Edit /etc/hosts
#Add hostname resolution to /etc/hosts on all three virtual machines
192.168.88.101 master
192.168.88.102 node1
192.168.88.103 node2
  3. Disable the swap partition
#Comment out the swap line in /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
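The fstab edit only takes effect after a reboot. A minimal sketch that also disables swap for the running session, assuming the standard CentOS swap entry shown above:

```shell
# Turn off all active swap immediately (run as root on each node)
swapoff -a
# Comment out any uncommented swap entry in /etc/fstab so it stays off after reboot
sed -ri '/\sswap\s/s/^[^#]/#&/' /etc/fstab
```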
  4. Confirm that the clocks on all three virtual machines are synchronized
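One way to satisfy the time-sync requirement is chrony (a sketch, assuming the default CentOS 7 package; any working NTP setup is fine):

```shell
# Install and enable chrony on all three nodes
yum install -y chrony
systemctl enable --now chronyd
# Verify that a time source is reachable and selected
chronyc sources
```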
  5. Add docker-ce.repo and kubernetes.repo under /etc/yum.repos.d/
[root@master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# cat << EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
#Copy kubernetes.repo and docker-ce.repo to node1 and node2
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node1:/etc/yum.repos.d/
[root@master ~]# scp /etc/yum.repos.d/kubernetes.repo node2:/etc/yum.repos.d/
[root@master ~]# scp /etc/yum.repos.d/docker-ce.repo node2:/etc/yum.repos.d/
[root@master ~]# scp /etc/yum.repos.d/docker-ce.repo node1:/etc/yum.repos.d/
  6. Install the software
#------------master,node1,node2------------------------------
#Install the following packages on the master node
[root@master ~]# yum install -y docker-ce kubelet kubeadm kubectl
#Start docker and enable it at boot
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
#Enable kubelet to start at boot
[root@master ~]# systemctl enable kubelet
#------------master,node1,node2------------------------------
  7. Set the kernel bridge parameters
#------------master,node1,node2------------------------------
#Set the bridge-nf-call-iptables parameters to 1 (master, node1 and node2 all need this change)
[root@master ~]# sysctl -w net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-iptables = 1
[root@master ~]# sysctl -w net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-ip6tables = 1
#------------master,node1,node2------------------------------
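The sysctl -w calls above only last until reboot. A sketch that persists them (the file name k8s.conf is an arbitrary choice), including loading the br_netfilter module that provides these keys:

```shell
# Make sure the bridge netfilter module is loaded now and at every boot
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
# Persist the bridge and forwarding settings across reboots
cat > /etc/sysctl.d/k8s.conf << 'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
# Re-read all sysctl configuration files
sysctl --system
```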
  8. Initialize the cluster with kubeadm
#Download the control-plane images
[root@master ~]# kubeadm init \
--apiserver-advertise-address=192.168.88.101 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.22.3 \
--service-cidr=10.96.0.0/12 \
--pod-network-cidr=10.244.0.0/16
#Before initializing, docker's configuration file needs an "exec-opts" entry: docker defaults to the cgroupfs driver, which prevents kubelet from starting
[root@master ~]# cat /etc/docker/daemon.json 
{
  "registry-mirrors" : [
  "https://registry.docker-cn.com",
  "https://docker.mirrors.ustc.edu.cn",
  "http://hub-mirror.c.163.com",
  "https://cr.console.aliyun.com/"],
  "live-restore":true,
  "exec-opts":["native.cgroupdriver=systemd"]
}
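After editing daemon.json, docker has to be restarted for the new cgroup driver to take effect; a quick check (sketch):

```shell
# Reload unit files and restart docker so the daemon.json change is picked up
systemctl daemon-reload
systemctl restart docker
# Should now report the systemd cgroup driver
docker info 2>/dev/null | grep -i 'cgroup driver'
```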
#Run the initialization
[root@master ~]# kubeadm init     --apiserver-advertise-address=192.168.88.101     --image-repository=registry.aliyuncs.com/google_containers     --pod-network-cidr=192.168.0.0/16
#Record the information printed at the end of the initialization output:
#Run the following on the master node
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubeadm join 192.168.88.101:6443 --token tygrdl.na7rreiftug222le \
	--discovery-token-ca-cert-hash sha256:a6f421f6d71be76e02cc38aedf676c86c6af467669bf8f669d9a08c9da38f312
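If the join command above is lost, or its token expires (tokens are valid for 24 hours by default), it can be regenerated on the master at any time:

```shell
# Prints a complete "kubeadm join ..." line with a fresh token and the CA cert hash
kubeadm token create --print-join-command
```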
  9. Install docker, kubelet, kubeadm and kubectl on the two worker nodes
[root@node2 ~]# yum install -y docker-ce kubelet kubeadm kubectl
  10. View the container images on master
[root@master ~]# docker image ls
REPOSITORY                                                        TAG       IMAGE ID       CREATED        SIZE
registry.aliyuncs.com/google_containers/kube-apiserver            v1.22.3   53224b502ea4   5 days ago     128MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.22.3   05c905cef780   5 days ago     122MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.22.3   0aa9c7e31d30   5 days ago     52.7MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.22.3   6120bd723dce   5 days ago     104MB
registry.aliyuncs.com/google_containers/etcd                      3.5.0-0   004811815584   4 months ago   295MB
registry.aliyuncs.com/google_containers/coredns                   v1.8.4    8d147537fb7d   5 months ago   47.6MB
registry.aliyuncs.com/google_containers/pause                     3.5       ed210e3e4a5b   7 months ago   683k
#Check the component status
[root@master ~]# kubectl get componentstatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-0               Healthy   {"health":"true","reason":""}  
#Check the node status; STATUS shows NotReady because no network add-on is installed yet
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   15h   v1.22.3
#Install the network add-on (flannel)
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
#After the network add-on is installed, restart the kubelet and docker services, then check the node status again
[root@master bak]# systemctl restart kubelet
[root@master bak]# systemctl daemon-reload
[root@master bak]# systemctl restart docker
[root@master bak]# kubectl get pods -n kube-system
NAME                             READY   STATUS      RESTARTS        AGE
coredns-7f6cbbb7b8-djrpq         0/1     Completed   0               29h
coredns-7f6cbbb7b8-mp7d8         0/1     Completed   0               29h
etcd-master                      1/1     Running     2 (2m26s ago)   29h
kube-apiserver-master            1/1     Running     2 (2m16s ago)   29h
kube-controller-manager-master   1/1     Running     2 (2m26s ago)   29h
kube-flannel-ds-n4tm4            1/1     Running     1 (2m26s ago)   13h
kube-proxy-2t7bk                 1/1     Running     3 (2m16s ago)   29h
kube-scheduler-master            1/1     Running     2 (2m26s ago)   14m
[root@master bak]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   29h   v1.22.3
#List the namespaces
[root@master bak]# kubectl get ns
NAME              STATUS   AGE
default           Active   29h
kube-node-lease   Active   29h
kube-public       Active   29h
kube-system       Active   29h
  11. Run the join command on node1 and node2
[root@node1 ~]# kubeadm join 192.168.88.101:6443 --token 55g3ki.c7ysg9iprxmlc3qz --discovery-token-ca-cert-hash sha256:a6f421f6d71be76e02cc38aedf676c86c6af467669bf8f669d9a08c9da38f312 --ignore-preflight-errors="swap"
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
#Check the join result on the master node
[root@master bak]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   30h     v1.22.3
node1    Ready    <none>                 12m     v1.22.3
node2    Ready    <none>                 4m15s   v1.22.3
[root@master bak]# kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS      RESTARTS      AGE    IP               NODE     NOMINATED NODE   READINESS GATES
coredns-7f6cbbb7b8-djrpq         0/1     Completed   0             30h    <none>           master   <none>           <none>
coredns-7f6cbbb7b8-mp7d8         0/1     Completed   0             30h    <none>           master   <none>           <none>
etcd-master                      1/1     Running     2 (49m ago)   30h    192.168.88.101   master   <none>           <none>
kube-apiserver-master            1/1     Running     2 (49m ago)   30h    192.168.88.101   master   <none>           <none>
kube-controller-manager-master   1/1     Running     2 (49m ago)   30h    192.168.88.101   master   <none>           <none>
kube-flannel-ds-dfvsb            1/1     Running     1 (13m ago)   18m    192.168.88.102   node1    <none>           <none>
kube-flannel-ds-dv4xm            0/1     Init:1/2    0             114s   192.168.88.103   node2    <none>           <none>
kube-flannel-ds-n4tm4            1/1     Running     1 (49m ago)   14h    192.168.88.101   master   <none>           <none>
kube-proxy-2t7bk                 1/1     Running     3 (49m ago)   30h    192.168.88.101   master   <none>           <none>
kube-proxy-gn9nj                 1/1     Running     1 (13m ago)   18m    192.168.88.102   node1    <none>           <none>
kube-proxy-vz677                 1/1     Running     0             114s   192.168.88.103   node2    <none>           <none>
kube-scheduler-master            1/1     Running     2 (49m ago)   62m    192.168.88.101   master   <none>           <none>
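With all three nodes Ready, a quick way to confirm that scheduling and pod networking work end to end is a throwaway deployment (a sketch; the name web and the nginx image are arbitrary choices):

```shell
# Create a test deployment with two replicas and expose it inside the cluster
kubectl create deployment web --image=nginx --replicas=2
kubectl expose deployment web --port=80
# The pods should land on node1/node2 and reach the Running state
kubectl get pods -o wide
# Clean up afterwards
kubectl delete service web && kubectl delete deployment web
```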


Error 1:

[root@master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Healthy     ok                                                                                            
etcd-0               Healthy     {"health":"true","reason":""} 

Solution:

#Comment out the --port line in the kube-scheduler static pod manifest shown below; after restarting docker the scheduler reports healthy again
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    #- --port=0

Problem 2:

#The flannel image cannot be pulled from quay.io
[root@master ~]# docker pull quay.io/coreos/flannel:v0.12.0-arm64

Solution:

#First pull the image from the Aliyun registry, then rename it with docker tag to quay.io/coreos/flannel:v0.12.0-amd64
[root@master ~]# docker image pull registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-amd64
[root@master ~]# docker image tag  registry.cn-shanghai.aliyuncs.com/leozhanggg/flannel:v0.12.0-amd64 quay.io/coreos/flannel:v0.12.0-amd64
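The worker nodes need the same image, so the retagged copy can also be shipped to them directly instead of repeating the pull on each node (a sketch using docker save/load over ssh):

```shell
# Export the retagged image from master and load it on each worker node
docker save quay.io/coreos/flannel:v0.12.0-amd64 | ssh node1 docker load
docker save quay.io/coreos/flannel:v0.12.0-amd64 | ssh node2 docker load
```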

Problem 3:

#After the flannel image is pulled, it does not automatically start as a container

Solution:

#If flannel was configured when the kubelet service started, restart the kubelet and docker services after completing the configuration

Problem 4:

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
	[ERROR Swap]: running with swap on is not supported. Please disable swap

Solution:

sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1
#Comment out the following line in /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
#After commenting it out, run swapoff -a so that swap is disabled without a reboot

Problem 5:

#The command kubeadm join 192.168.88.101:6443 --token 55g3ki.c7ysg9iprxmlc3qz --discovery-token-ca-cert-hash sha256:a6f421f6d71be76e02cc38aedf676c86c6af467669bf8f669d9a08c9da38f312 --ignore-preflight-errors="swap" hangs with no response

Solution:

#The token may have expired; tokens are time-limited, and a new one can be generated on the master
[root@master bak]# kubeadm token create --ttl 0
55g3ki.c7ysg9iprxmlc3qz

Problem 6:

# After running kubeadm join with the new token, the following message appears
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Solution:

#The kubelet service was not running; starting kubelet resolves the issue

Problem 7:

#failed to run Kubelet: running with swap on is not supported, please disable swap

Solution:

#Method 1: allow kubelet to run with swap enabled
vim /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
#Method 2: disable swap
#Comment out the following line in /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
