Controlling Pod Placement on Nodes, and Other Pod Types


8. Controlling pod placement

Control pod placement with node labels:

kubectl label node node1 disktype=ssd   #add a label to node node1

kubectl label node node1 disktype-      #remove the label

kubectl get node --show-labels          #show node labels
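To confirm which nodes carry the label, a standard label selector also works:

kubectl get node -l disktype=ssd        #list only nodes labeled disktype=ssd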

 

In the pod spec, add a nodeSelector that matches the label (see the sketch below).

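A minimal sketch of such a pod spec; the pod name and image here are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: pod-ssd            #hypothetical name
spec:
  containers:
  - name: nginx
    image: nginx
  nodeSelector:            #schedule only onto nodes labeled disktype=ssd
    disktype: ssd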

9. DaemonSet

A DaemonSet runs exactly one replica on each node; it suits node-level workloads such as storage, log collection, and networking agents.

kubectl get daemonset --namespace=kube-system   #list DaemonSets

 

kube-flannel-ds and kube-proxy each run one pod on every node (they are system components).

kubectl get pod --namespace=kube-system -o wide   #list system component pods



10. Inspecting the flannel deployment file

cat kube-flannel.yml

---

kind: ClusterRole

apiVersion: rbac.authorization.k8s.io/v1beta1

metadata:

  name: flannel

rules:

  - apiGroups:

      - ""

    resources:

      - pods

    verbs:

      - get

  - apiGroups:

      - ""

    resources:

      - nodes

    verbs:

      - list

      - watch

  - apiGroups:

      - ""

    resources:

      - nodes/status

    verbs:

      - patch

---

kind: ClusterRoleBinding

apiVersion: rbac.authorization.k8s.io/v1beta1

metadata:

  name: flannel

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: flannel

subjects:

- kind: ServiceAccount

  name: flannel

  namespace: kube-system

---

apiVersion: v1

kind: ServiceAccount

metadata:

  name: flannel

  namespace: kube-system

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: kube-flannel-cfg

  namespace: kube-system

  labels:

    tier: node

    app: flannel

data:

  cni-conf.json: |

    {

      "name": "cbr0",

      "type": "flannel",

      "delegate": {

        "isDefaultGateway": true

      }

    }

  net-conf.json: |

    {

      "Network": "10.244.0.0/16",

      "Backend": {

        "Type": "vxlan"

      }

    }

---

apiVersion: extensions/v1beta1

kind: DaemonSet      #resource type

metadata:

  name: kube-flannel-ds

  namespace: kube-system    #target namespace

  labels:

    tier: node

    app: flannel

spec:

  template:

    metadata:

      labels:

        tier: node

        app: flannel

    spec:

      hostNetwork: true   #pods use the host network

      nodeSelector:    #node label selector

        beta.kubernetes.io/arch: amd64

      tolerations:

      - key: node-role.kubernetes.io/master

        operator: Exists

        effect: NoSchedule

      serviceAccountName: flannel

      initContainers:

      - name: install-cni

        image: quay.io/coreos/flannel:v0.9.1-amd64

        command:

        - cp

        args:

        - -f

        - /etc/kube-flannel/cni-conf.json

        - /etc/cni/net.d/10-flannel.conf

        volumeMounts:

        - name: cni

          mountPath: /etc/cni/net.d

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      containers:

      - name: kube-flannel

        image: quay.io/coreos/flannel:v0.9.1-amd64

        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]

        securityContext:

          privileged: true

        env:

        - name: POD_NAME

          valueFrom:

            fieldRef:

              fieldPath: metadata.name

        - name: POD_NAMESPACE

          valueFrom:

            fieldRef:

              fieldPath: metadata.namespace

        volumeMounts:

        - name: run

          mountPath: /run

        - name: flannel-cfg

          mountPath: /etc/kube-flannel/

      volumes:

        - name: run

          hostPath:

            path: /run

        - name: cni

          hostPath:

            path: /etc/cni/net.d

        - name: flannel-cfg

          configMap:

            name: kube-flannel-cfg
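To deploy flannel from this file, or re-apply it after edits, a standard apply works (assuming the file is in the current directory):

kubectl apply -f kube-flannel.yml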



Viewing the kube-proxy configuration

kubectl edit shows the live configuration, including for resources created from the command line:

kubectl edit daemonset kube-proxy --namespace=kube-system   #resource type, name, and the system namespace

kubectl edit deployment nginx
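If you only want to read the live spec without opening an editor, dump the same object as YAML instead:

kubectl get daemonset kube-proxy --namespace=kube-system -o yaml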

 

 

kubectl edit daemonset kube-proxy --namespace=kube-system

apiVersion: extensions/v1beta1

kind: DaemonSet

metadata:

  creationTimestamp: 2018-07-27T06:21:19Z    #creation time

  generation: 1

  labels:

    k8s-app: kube-proxy

  name: kube-proxy

  namespace: kube-system       #the namespace

  resourceVersion: "355720"

  selfLink: /apis/extensions/v1beta1/namespaces/kube-system/daemonsets/kube-proxy

  uid: 464d59d8-9165-11e8-8aac-00155d3d4613

spec:

  revisionHistoryLimit: 10

  selector:

    matchLabels:

      k8s-app: kube-proxy

  template:

    metadata:

      creationTimestamp: null

      labels:

        k8s-app: kube-proxy

    spec:

      containers:

      - command:

        - /usr/local/bin/kube-proxy

        - --config=/var/lib/kube-proxy/config.conf

        image: k8s.gcr.io/kube-proxy-amd64:v1.10.1

        imagePullPolicy: IfNotPresent

        name: kube-proxy

        resources: {}

        securityContext:

          privileged: true

        terminationMessagePath: /dev/termination-log

        terminationMessagePolicy: File

        volumeMounts:

        - mountPath: /var/lib/kube-proxy

          name: kube-proxy

        - mountPath: /run/xtables.lock

          name: xtables-lock

        - mountPath: /lib/modules

          name: lib-modules

          readOnly: true

      dnsPolicy: ClusterFirst

      hostNetwork: true

      restartPolicy: Always

      schedulerName: default-scheduler

      securityContext: {}

      serviceAccount: kube-proxy

      serviceAccountName: kube-proxy

      terminationGracePeriodSeconds: 30

      tolerations:

      - effect: NoSchedule

        key: node-role.kubernetes.io/master

      - effect: NoSchedule

        key: node.cloudprovider.kubernetes.io/uninitialized

        value: "true"

      volumes:

      - configMap:

          defaultMode: 420

          name: kube-proxy

        name: kube-proxy

      - hostPath:

          path: /run/xtables.lock

          type: FileOrCreate

        name: xtables-lock

      - hostPath:

          path: /lib/modules

          type: ""

        name: lib-modules

  templateGeneration: 1

  updateStrategy:

    rollingUpdate:

      maxUnavailable: 1

    type: RollingUpdate

status:       #runtime status of the DaemonSet

  currentNumberScheduled: 3

  desiredNumberScheduled: 3

  numberAvailable: 3

  numberMisscheduled: 0

  numberReady: 3

  observedGeneration: 1

  updatedNumberScheduled: 3
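The updateStrategy above replaces pods one node at a time (maxUnavailable: 1); the progress of such a rollout can be followed with a standard command:

kubectl rollout status daemonset kube-proxy --namespace=kube-system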



11. Building your own DaemonSet (node-exporter)

cat node_ex.yml

apiVersion: extensions/v1beta1

kind: DaemonSet

metadata:

 name: node-exporter-daemonset

 

spec:

 template:

  metadata:

    labels:

     app: prometheus

  spec:   

   hostNetwork: true   #use the host network

   containers:

   - name: node-exporter

     image: prom/node-exporter

     imagePullPolicy: IfNotPresent

     command:     #container startup command

     - /bin/node_exporter

     - --path.procfs

     - /host/proc

     - --path.sysfs

     - /host/sys

     - --collector.filesystem.ignored-mount-points

     - ^/(sys|proc|dev|host|etc)($|/)

     volumeMounts:   #mount points inside the container

     - name: proc

       mountPath: /host/proc

     - name: sys

       mountPath: /host/sys

     - name: root

       mountPath: /rootfs

   volumes:    #host directories to mount

     - name: proc

       hostPath:

        path: /proc

     - name: sys

       hostPath:

        path: /sys

     - name: root

       hostPath:

        path: /
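A quick way to run this and confirm that one pod lands on every node (standard commands; the DaemonSet name comes from the manifest above):

kubectl apply -f node_ex.yml
kubectl get daemonset node-exporter-daemonset
kubectl get pod -o wide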

 



12. Job types


1. Work-type containers (exit after finishing their work): Job

Service-type containers: Deployment, DaemonSet, ReplicaSet.

An example of a work-type container:

cat job.yml

apiVersion: batch/v1

kind: Job

metadata:

  name: job

spec:

  template:

    metadata:

      name: myjob

    spec:

      containers:

      - name: hello

        image: busybox

        command: ["echo","hello k8s job! "]

      restartPolicy: Never

batch/v1 is the current API version for Job.

kind specifies the resource type.

restartPolicy specifies when the container should be restarted. For a Job it can only be set to Never or OnFailure; for service-type workloads such as applications it can also be Always.
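Assuming the manifest is saved as job.yml, start the job with a standard apply:

kubectl apply -f job.yml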

 

 

Check the job's result:

kubectl  get job


Or check via the pods:

kubectl  get pod --show-all


If a run fails:


Check the logs and the pod's events:

kubectl  logs job-4jcbj

kubectl  describe pod  job-4jcbj
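When finished, the Job and the pods it created can be removed together:

kubectl delete -f job.yml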

2. Job parallelism (running multiple pods to improve throughput)

This is achieved with the parallelism field.

 

 

apiVersion: batch/v1

kind: Job

metadata:

  name: job

spec:

  completions: 6 #total number of completions

  parallelism: 3 #pods run in parallel at a time

  template:

    metadata:

      name: myjob

    spec:

      containers:

      - name: hello

        image: busybox

        command: ["inval","hello k10s job! "]

      restartPolicy: OnFailure
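With these settings the Job keeps three pods running at a time until six pods have completed successfully; this can be observed while it runs:

kubectl apply -f job.yml
kubectl get job   #successful completions count up to 6
kubectl get pod   #at most 3 pods run at once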


 

3. Scheduled jobs

CronJob runs jobs on a schedule, like cron.

cat job.yml

apiVersion: batch/v1beta1 #API version

kind: CronJob #scheduled job type

metadata:

  name: hello

spec:

  schedule: "*/1 * * * *" #run every minute

  jobTemplate:   #job template

   spec:

    template:

      spec:

        containers:

        - name: hello

          image: busybox

          command: ["echo","hello k10s job! "]

        restartPolicy: OnFailure
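After applying the manifest, the CronJob and the jobs it spawns each minute can be checked with:

kubectl apply -f job.yml
kubectl get cronjob hello
kubectl get job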

 

 

Only the three most recent jobs are listed: by default a CronJob keeps just the last 3 successful jobs (its successfulJobsHistoryLimit), while a new job is created and completes every minute.

