k8s Containers: Operations and Management

Posted by yangsirs


II. Operations and Management

Maintenance reference URL:

https://jimmysong.io/kubernetes-handbook/practice/install-kubernetes-on-centos.html

1. Node management

Mark the node as unschedulable so no new Pods are scheduled onto it:
kubectl cordon <node>

Evict all Pods from the node:
kubectl drain <node>

Allow new Pods to be scheduled onto the node again:
kubectl uncordon <node>

Note: this command deletes all Pods on the node (DaemonSet Pods excepted), and their controllers recreate them on other nodes; it is typically used when the node needs maintenance. Running drain directly also cordons the node automatically, so a separate cordon is not required.
   Once maintenance on the node is finished and the kubelet has been started again, run kubectl uncordon to bring the node back into the Kubernetes cluster.
-------------------------------------------------------------------------------------------------------------------------
eg: 
    List all nodes in the cluster:
    kubectl get nodes

    Tell Kubernetes to drain the node so it can be taken out of service:
    kubectl drain <node name>

    Once the command returns without errors, you can shut the node down (on a cloud platform, you can delete the virtual machine backing it). If you left the node in the cluster
    during the maintenance, you then need to run the following command:
    kubectl uncordon <node name>
    which tells Kubernetes that it may schedule new Pods onto the node again
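
    In practice, a node running DaemonSet Pods or Pods with emptyDir data will refuse a plain drain. A hedged sketch of the flags commonly added in that case (verify them against your kubectl version before relying on them):
    # evict everything, tolerating DaemonSet Pods and discarding emptyDir data
    kubectl drain <node name> --ignore-daemonsets --delete-local-data
    # add --force only if the node also runs Pods that are not managed by any controller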

2. Creating a Deployment controller

  • A simple nginx application can be defined as:
  • cat nginx.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
  • Scale the number of Pods:
kubectl scale deployment nginx-deployment --replicas 6
  • If the cluster supports horizontal pod autoscaling, you can also configure autoscaling for the Deployment:
kubectl autoscale deployment nginx-deployment --min=10 --max=15 --cpu-percent=80
  • Update the image:
kubectl set image deployment/nginx-deployment  nginx=nginx:1.14.2

Note: the first "deployment" refers to the resource type (the Deployment controller);
    the second part, "nginx-deployment", is the name of that controller;
    the word before the "=" is the container name;
    the value after the "=" is the new image to roll out.
  • Roll back to the previous image:
kubectl rollout undo deployment/nginx-deployment
  • Check the rollout status:
kubectl rollout status deployment/nginx-deployment
kubectl get deployments
  • Running kubectl get rs shows that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, while scaling the original ReplicaSet down to 0 replicas.
# kubectl get rs
NAME               DESIRED   CURRENT   READY   AGE
nginx-68ccc6f75f   0         0         0       34m
nginx-755464dd6c   3         3         3       2d3h
  • The next time these Pods need updating, only the Pod template inside the Deployment has to be changed.
A Deployment guarantees that only a limited number of Pods are down during an upgrade. By default it ensures that at least the desired number of Pods minus one are up (at most one unavailable).

A Deployment also ensures that only a limited number of Pods are created above the desired count. By default it ensures that at most one Pod more than the desired number is up (at most 1 surge).

Note: during an upgrade, Pods are replaced one by one, so a large batch of Pods is never unavailable at the same time; the strategy sketch below makes these limits explicit.
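
A minimal sketch of how those rolling-update limits can be written explicitly in a Deployment spec (the strategy block is an illustration, not part of the original manifest; newer apps/v1 Deployments default to 25% for both values):

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod below the desired count during an upgrade
      maxSurge: 1         # at most one Pod above the desired count during an upgrade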

3. Using the kubectl tool

  • 3.1 Create
    kubectl run nginx --replicas=3 --labels="app=nginx-example" --image=nginx:1.17.4 --port=80

  • 3.2 View
    kubectl get deploy
    kubectl get pods --show-labels
    kubectl get pods -l app=example
    kubectl get pods -o wide

  • 3.3 Expose
    kubectl expose deployment nginx --port=88 --type=NodePort --target-port=80 --name=nginx-service
    kubectl describe service nginx-service

  • 3.4 Troubleshooting
    kubectl describe TYPE NAME_PREFIX
    kubectl logs nginx-xxx
    kubectl exec -it nginx-xxx bash

  • 3.5 Update
    kubectl set image deployment/nginx nginx=nginx:1.17.4

    kubectl edit deployment/nginx

  • 3.6 Rollout management
    kubectl rollout status deployment/nginx
    kubectl rollout history deployment/nginx
    kubectl rollout history deployment/nginx --revision=3
    kubectl scale deployment nginx --replicas=10

  • 3.7 Roll back
    kubectl rollout undo deployment/nginx-deployment
    kubectl rollout undo deployment/nginx-deployment --to-revision=3

  • 3.8 Delete
    kubectl delete deploy/nginx
    kubectl delete svc/nginx-service

  • 3.9 APIs used when writing YAML files
    When defining configuration, specify the latest stable API version (currently v1):
    kubectl api-versions

4. Composing the Deployment and Service files for a web service

  • 4.1 The nginx Deployment manifest
cat > nginx-deployment.yaml << EOF 
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.10
        ports:
        - containerPort: 80
EOF
  • 4.2 The nginx Service manifest
cat > nginx-service.yaml << EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  ports:
  - port: 88
    targetPort: 80
  selector:
    app: nginx
EOF
  • 4.3 Create the resources
kubectl  create  -f  nginx-deployment.yaml
kubectl  create  -f  nginx-service.yaml
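
A quick, optional verification step (the curl target assumes the cluster IP reported for nginx-service by kubectl get svc):
kubectl get deployment nginx-deployment
kubectl get svc nginx-service
# from any node, the Service should answer on its cluster IP and port 88, e.g.
# curl -I <cluster-ip>:88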

5. Basic Pod management

  • Create / query / update / delete
  • Resource limits
  • Scheduling constraints
  • Restart policy
  • Health checks
  • Troubleshooting
The Deployment controller takes care of creating, updating, and otherwise managing Pods.
  • 5.1 Define a Pod object:
cat > pod.yaml  << EOF  
apiVersion: v1  
kind: Pod  
metadata:  
  name: nginx-pod  
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14
EOF
  • 5.2 Create the Pod resource
    kubectl create -f pod.yaml

  • 5.3 View the Pod
    kubectl get pod [nginx-pod]

  • 5.4 View the Pod's detailed description
    kubectl describe pod nginx-pod

  • 5.5 Update the resource
    kubectl apply -f pod.yaml

  • 5.6 Delete the resource
    Note: deleting by file has the same effect as deleting by specifying the resource type and name directly.
    kubectl delete -f pod.yaml

6. Pod resource limits

  • cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"

7. Pod scheduling constraints and restart policy

7.1 Scheduling constraints

Pod.spec.nodeName: forces the Pod to be scheduled onto the named Node.
Pod.spec.nodeSelector: selects nodes through the label-selector mechanism.
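
The examples that follow use only nodeSelector. As a hedged illustration of the nodeName alternative (the Pod name nginx-pinned is made up; the node name follows this cluster's naming):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pinned
spec:
  nodeName: 192.168.10.22     # scheduler is bypassed; the kubelet on this node runs the Pod directly
  containers:
  - name: nginx
    image: nginx:1.14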

  • Verification steps:
  • 1) On the master, first attach a label to a chosen node.
  • 2) Edit pod.yaml and put that label into the configuration of the Pod to be created.
  • 3) pod.yaml then matches the nodes carrying the label and the Pod is placed there; if no label is specified, scheduling spreads across all nodes as usual.
  • 4) In this way, the Pod is created on the designated worker node.
  • Label the designated worker node:
kubectl  label node  192.168.10.22 env_role=dev
  • View the node and confirm the label:
kubectl  describe  node  192.168.10.22
  • Configure pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod2
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  nodeSelector:
    env_role: dev
  • Create the new Pod:
kubectl  create -f  pod.yaml
  • Check which node the new Pod landed on:
# kubectl   get pod  -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE   READINESS GATES
nginx-deployment-5694557fbc-5jhxh   1/1     Running   1          26h   172.50.32.4   192.168.10.24   <none>           <none>
nginx-deployment-5694557fbc-bdtd4   1/1     Running   1          26h   172.50.36.2   192.168.10.23   <none>           <none>
nginx-deployment-5694557fbc-gkr9x   1/1     Running   1          26h   172.50.26.2   192.168.10.22   <none>           <none>
nginx-pod                           1/1     Running   0          26m   172.50.26.3   192.168.10.22   <none>           <none>
nginx-pod2                          1/1     Running   0          3s    172.50.26.4   192.168.10.22   <none>           <none>

As shown above, the label constraint took effect.

7.2 Restart policy

  • Three restart policies:
    Always: whenever the container stops, always recreate it. This is the default policy.
    OnFailure: restart the container only when it exits abnormally (non-zero exit code).
    Never: never restart the container once it has terminated.

eg:

  • cat pod.yaml
apiVersion: v1 
kind: Pod
metadata:
  name: nginx-pod2
  labels: 
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  restartPolicy: OnFailure

8. Health checks

Kubernetes provides a Probe mechanism with two types:

  • livenessProbe
    If the check fails, the container is killed and handled according to the Pod's restartPolicy.

  • readinessProbe
    If the check fails, Kubernetes removes the Pod from the Service endpoints.

Probes support three check methods:

  • httpGet
    Sends an HTTP request; a status code in the 200-400 range means success.

  • exec
    Runs a shell command; an exit code of 0 means success.

  • tcpSocket
    Attempts to open a TCP socket; success means the connection is established.

eg:

  • cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  restartPolicy: OnFailure
  containers:
  - name: nginx
    image: nginx:1.14
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /index.html
        port: 80


With this configuration, as soon as the check on the site's index page returns a status code outside the 200-400 range, the container is killed and a new container is started.
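
Only httpGet is shown above. A hedged sketch of the other two probe methods, written as a fragment to splice under the same container entry (the file path and timing values are illustrative assumptions):

    livenessProbe:
      exec:
        command:
        - cat
        - /usr/share/nginx/html/index.html   # a non-zero exit code (file missing) marks the probe as failed
    readinessProbe:
      tcpSocket:
        port: 80                             # succeeds once a TCP connection to port 80 can be established
      initialDelaySeconds: 5
      periodSeconds: 10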

9. Service proxy modes and load balancing

9.1 Service

Proxy modes: the most widely used today is iptables-based forwarding; from version 1.8 onward, kube-proxy can also forward with the in-kernel IPVS mode.
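
A hedged sketch of how the proxy mode is typically selected through the kube-proxy configuration (the file path is an assumption; older setups pass --proxy-mode=ipvs as a kube-proxy flag instead):

# fragment of the kube-proxy config file, e.g. /var/lib/kube-proxy/config.conf
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # "iptables" is the default on Linux; leave empty to let kube-proxy pick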

9.2 Load-balancing proxy

Service proxying:

  • cat service.yaml
apiVersion: v1                      
kind: Service                       
metadata:                           
  name: my-service                  
spec:                               
  selector:                         
    app: MyApp                      
  ports:                            
  - name: http                      
    protocol: TCP                   
    port: 80                        
    targetPort: 80                  
  - name: https                     
    protocol: TCP                   
    port: 443                       
    targetPort: 443                 
  • Create and view the Service:
[root@k8s-master pod]# kubectl  create -f  service.yaml
[root@k8s-master pod]# kubectl  get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
kubernetes      ClusterIP   10.10.10.1     <none>        443/TCP          31d
my-service      ClusterIP   10.10.10.234   <none>        80/TCP,443/TCP   14h
nginx-service   ClusterIP   10.10.10.61    <none>        88/TCP           42h
  • Edit the my-service Service
    Change the selector label inside it so the Service proxies to the intended Pods.

  • kubectl edit svc/my-service
    The content below is loaded automatically from the Service that was already created; there is nothing to add by hand, just modify it.

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-10-24T10:50:44Z"
  name: my-service
  namespace: default
  resourceVersion: "270677"
  selfLink: /api/v1/namespaces/default/services/my-service
  uid: 21553546-f64c-11e9-b55f-000c2960f61c
spec:
  clusterIP: 10.10.10.234
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  selector:
    app: nginx
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
  • View the backend endpoints the Service proxies to:
[root@k8s-master pod]# kubectl   get  endpoints  my-service
NAME         ENDPOINTS                                                  AGE
my-service   172.50.26.2:80,172.50.26.3:80,172.50.32.4:80 + 5 more...   15h
[root@k8s-master pod]# kubectl   get  ep  my-service
NAME         ENDPOINTS                                                  AGE
my-service   172.50.26.2:80,172.50.26.3:80,172.50.32.4:80 + 5 more...   15h
  • Access:
You can test access from any node with:  curl -I 10.10.10.234:80

10. Service discovery and DNS

10.1 Service discovery

  • Service discovery supports two modes: Service environment variables and DNS.
  • Environment variables
When a Pod is run on a Node, the kubelet adds a set of environment variables to each container, and programs inside the Pod's containers can use them to discover Services.
The environment variable names have the following format:
{SVCNAME}_SERVICE_HOST
{SVCNAME}_SERVICE_PORT
where the service name (and any port name) is converted to upper case and hyphens become underscores.

Limitations:
1) The creation order of Pod and Service matters: the Service must be created before the Pod, otherwise the environment variables are not set in the Pod.
2) A Pod can only obtain the environment variables of Services in its own Namespace.
  • DNS
The DNS service watches the Kubernetes API and creates a DNS record for every Service, so Pods can look up a Service's address by its DNS name.
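
As a concrete illustration with the nginx-service created earlier (default namespace, and assuming the default cluster.local DNS domain):

# environment variables injected into Pods created after the Service (name upper-cased, "-" becomes "_"):
#   NGINX_SERVICE_SERVICE_HOST=10.10.10.61
#   NGINX_SERVICE_SERVICE_PORT=88
# cluster DNS name resolvable from inside any Pod:
#   nginx-service.default.svc.cluster.local
kubectl exec -it nginx-pod -- env | grep NGINX_SERVICE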

11. Publishing a service with a Service

11.1 Access method

Access the IP and port exposed by the Service directly.

11.2 Three service types

1) ClusterIP
Allocates an internal cluster IP address that is reachable only from inside the cluster. This is the default ServiceType.

2) NodePort
Allocates an internal cluster IP address and, in addition, opens a port on every node to expose the service, so it can be reached from outside the cluster.
Access address: <NodeIP>:<NodePort>

3) LoadBalancer
Allocates an internal cluster IP address and opens a port on every node to expose the service.
In addition, Kubernetes asks the underlying cloud platform for a load balancer and adds every node (<NodeIP>:<NodePort>) to it as a backend.
  • eg:
Note the use of the type field:
# cat  nginx-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

12. Publishing services with Ingress

  • Traffic is first handled by the Ingress, then forwarded to the Service.
  • Namespace and ConfigMap configuration for the controller:
# cat  configmap.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    myapp: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx

---
  • RBAC (roles and permissions) configuration for the controller:
# cat  rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    myapp: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-rolebinding
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrolebinding
  labels:
    myapp: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx    
---
  • Ingress controller configuration
  • Note: adjust the image address if needed.
# cat  ingress-controller.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      myapp: ingress-nginx
  template:
    metadata:
      labels:
        myapp: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          # image address of the ingress controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---
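  • Note that nothing in these files exposes the controller itself, even though --publish-service references a Service named ingress-nginx. A hedged sketch of a NodePort Service that could fill that role (the nodePort values are illustrative assumptions):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    myapp: ingress-nginx
spec:
  type: NodePort
  selector:
    myapp: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080    # any free port in the 30000-32767 range
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443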
  • The Pods published through the Ingress, plus the Service in front of them, are combined in the following file:
# cat  ingress-pod.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-nginx
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp-nginx
          image: nginx:1.7.9
          ports:
            - name: http
              containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: myapp-nginx
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30000

---

At this point, once the files above have been applied, the application can be accessed through any <NodeIP>:<NodePort>.
  • Configure domain-based access through the Ingress:
# cat  ingress-nginx.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: nginx.k8s.com
    http:
      paths:
      - path:
        backend:
          serviceName: myapp-nginx
          servicePort: 80
---

With this manifest, access goes through the Ingress ---> Service path.
  • Apply all of the Ingress configuration files:
cd  ingress ; kubectl  apply  -f .
  • Any of the following addresses can be used for access:
http://192.168.10.22:30000
http://192.168.10.23:30000
http://192.168.10.24:30000
  • You can also run a separate nginx as a front-end proxy that targets the port exposed on each worker node.
  • That way, the worker nodes can be reached over the internal network.
# cat  /etc/nginx/conf.d/test.k8s.com.conf 
upstream k8s_nginx {
    server 192.168.10.22:30000 weight=2;
    server 192.168.10.23:30000 weight=2;
    server 192.168.10.24:30000 weight=2;
}

server {
    listen 80;
    server_name test.k8s.com;
    index       index.html index.htm index.php;
    access_log  /var/log/nginx/access.log main;
    error_log   /var/log/nginx/error.log; 
    location / {
        proxy_pass http://k8s_nginx;
        proxy_set_header Host $host:$server_port;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
  • Then just access the front-end domain name:
http://test.k8s.com

To be continued ^_^
