Some Notes on Deployments in Kubernetes

Posted by 山河已无恙


Preface


  • These topics come up while learning K8s, so I am organizing my notes to commit them to memory
  • This post covers:
    • Creating a Deployment
    • Scaling Pods up and down through a Deployment
    • Rolling updates and rollbacks of container images through a Deployment
    • Scaling Pods via HPA ran into some issues (possibly my machine); I will add more once that is resolved.
    • My grasp of this part is still a bit messy and will need further tidying up later.

Love arises one knows not whence, and grows ever deeper; sadly most of it turns from deep to shallow, and we forget each other as strangers. So it is with me. — 烽火戏诸侯, 《雪中悍刀行》


Deployment

Deployment is a concept introduced in Kubernetes v1.2 to better solve the problem of orchestrating Pods. Internally, a Deployment uses a ReplicaSet to achieve its goal. Whether you look at a Deployment's purpose, its YAML definition, or its concrete command-line operations, it can be seen as an upgrade of the RC (ReplicationController); the two are more than 90% similar.

One of the biggest upgrades of Deployment over RC is that we can always know the current progress of a Pod "deployment". Creating a Pod, scheduling it, binding it to a node, and starting its containers on the target Node all take time, so the target state of "N Pod replicas started" is really the end state of a continuously changing deployment process.
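
For example, the progress and revision history of a rollout can be checked at any time with kubectl rollout. A minimal sketch, using the web1 Deployment that is created later in this post:

# Blocks until the rollout completes (or reports why it is stuck), printing progress as replicas become ready.
kubectl rollout status deployment/web1
# Lists the recorded revisions of the Deployment.
kubectl rollout history deployment/web1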

Typical use cases for Deployments:

  • Create a Deployment to roll out a ReplicaSet. The ReplicaSet creates Pods in the background. Check the status of the rollout to see whether it succeeds.
  • Declare the new state of the Pods by updating the Deployment's PodTemplateSpec. A new ReplicaSet is created, and the Deployment moves Pods from the old ReplicaSet to the new one at a controlled rate. Each new ReplicaSet updates the Deployment's revision.
  • Roll back to an earlier Deployment revision if the current state of the Deployment is not stable. Each rollback updates the Deployment's revision.
  • Scale the Deployment up to handle more load.
  • Pause the Deployment to apply multiple changes to its PodTemplateSpec, then resume it to start a new rollout (see the sketch after this list).
  • Use the Deployment's status as an indicator that a rollout has become stuck.
  • Clean up older ReplicaSets that are no longer needed.
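
The pause/resume use case above can be sketched roughly as follows, again assuming the web1 Deployment (container name nginx) that is created later in this post:

# Pause the rollout, batch several spec changes, then resume so they go out as a single revision.
kubectl rollout pause deployment/web1
kubectl set image deployment/web1 nginx=nginx:1.9
kubectl set resources deployment/web1 -c nginx --limits=cpu=200m,memory=256Mi
kubectl rollout resume deployment/web1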

ReplicaSet

The purpose of a ReplicaSet is to maintain a stable set of replica Pods running at any given time. It is therefore usually used to guarantee the availability of a specified number of identical Pods.

How a ReplicaSet works
A ReplicaSet is defined by a set of fields, including:

  • a selector that identifies the set of Pods it may acquire,
  • a number indicating how many replicas it should maintain,
  • and a Pod template specifying the Pods it should create when new Pods are needed to meet the replica count.

A ReplicaSet fulfills its purpose by creating and deleting Pods as needed to reach the desired replica count. When it needs to create new Pods, it uses the Pod template provided.

A ReplicaSet is linked to its Pods via the Pods' metadata.ownerReferences field, which specifies the owner of the current object. All Pods acquired by a ReplicaSet carry the owning ReplicaSet's identifying information in their ownerReferences field. It is through this link that the ReplicaSet knows the state of the Pods it maintains and plans its actions accordingly.

A ReplicaSet identifies the Pods it should acquire by using its selector. If a Pod has no OwnerReference, or its OwnerReference is not a controller, and it matches a ReplicaSet's selector, it is immediately acquired by that ReplicaSet.
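
A quick way to see this link is to print a Pod's ownerReferences. A minimal sketch; <pod-name> is a placeholder for any Pod managed by a ReplicaSet, such as the web1-... Pods created later in this post:

# For a ReplicaSet-managed Pod this prints an entry with kind: ReplicaSet,
# the ReplicaSet's name and uid, and controller: true.
kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences}'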

When to use a ReplicaSet

A ReplicaSet ensures that a specified number of Pod replicas are running at any given time. However, a Deployment is a higher-level concept that manages ReplicaSets and provides declarative updates to Pods along with many other useful features. We therefore recommend using Deployments instead of ReplicaSets directly, unless you need custom update orchestration or do not need updates at all.

This actually means that you may never need to manipulate ReplicaSet objects: use a Deployment instead, and define your application in its spec section. For reference, here is an example ReplicaSet manifest:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # modify replicas according to your case
  replicas: 3
  selector:
    matchLabels:
      tier: frontend
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: nginx

Preparing the learning environment

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$dir=k8s-deploy-create ;mkdir $dir;cd $dir
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get ns
NAME              STATUS   AGE
default           Active   78m
kube-node-lease   Active   79m
kube-public       Active   79m
kube-system       Active   79m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  create  ns liruilong-deploy-create
namespace/liruilong-deploy-create created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  config set-context  $(kubectl config current-context)  --namespace=liruilong-deploy-create
Context "kubernetes-admin@kubernetes" modified.
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl config  view | grep namespace
    namespace: liruilong-deploy-create
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

Creating a Deployment from a YAML file

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl create deployment web1 --image=nginx --dry-run=client -o yaml > ngixndeplog.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$vim ngixndeplog.yaml

ngixndeplog.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  apply  -f ngixndeplog.yaml
deployment.apps/web1 created

Checking the created Deployment

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get deploy -o wide
NAME   READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES   SELECTOR
web1   2/3     3            2           37s   nginx        nginx    app=web1

Checking the created ReplicaSet

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get rs -o wide
NAME              DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES   SELECTOR
web1-66b5fd9bc8   3         3         3       4m28s   nginx        nginx    app=web1,pod-template-hash=66b5fd9bc8

Checking the created Pods

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running   0          3m45s   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-9lxh2   1/1     Running   0          3m45s   10.244.171.130   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running   0          3m45s   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>

Scaling Pods up and down

In real production systems we often need to scale a service out, and we may also need to reduce the number of service instances when resources are tight or the workload drops. We can use the scale mechanism of a Deployment/RC to do this. Kubernetes provides two modes for scaling Pods up and down: manual and automatic.

In manual mode, running the kubectl scale command against a Deployment/RC sets the number of Pod replicas in one step.

In automatic mode, the user chooses a performance metric or a custom business metric and specifies a range for the number of Pod replicas, and the system automatically adjusts the replica count within that range as the metric changes.

Manual mode

Modify from the command line: kubectl scale deployment web1 --replicas=2

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl scale deployment web1 --replicas=2
deployment.apps/web1 scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running   0          8m19s   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running   0          8m19s   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

Modify with kubectl edit: kubectl edit deployment web1

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl edit deployment web1
deployment.apps/web1 edited
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME                    READY   STATUS              RESTARTS   AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running             0          9m56s   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-9lnds   0/1     ContainerCreating   0          6s      <none>           vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running             0          9m56s   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

Modify the YAML file

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$sed  -i 's/replicas: 3/replicas: 2/' ngixndeplog.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  apply  -f ngixndeplog.yaml
deployment.apps/web1 configured
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pod -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
web1-66b5fd9bc8-2wpkr   1/1     Running   0          12m   10.244.171.131   vms82.liruilongs.github.io   <none>           <none>
web1-66b5fd9bc8-s9w7g   1/1     Running   0          12m   10.244.70.3      vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

Automatic mode: HPA

Starting with Kubernetes v1.1, a new controller called the Horizontal Pod Autoscaler (HPA) was added to automatically scale Pods up and down based on CPU utilization.

The HPA controller, driven by the kube-controller-manager startup parameter --horizontal-pod-autoscaler-sync-period (default 30s), periodically checks the CPU utilization of the target Pods and, when the conditions are met, adjusts the number of Pod replicas in the ReplicationController or Deployment so that the average Pod CPU utilization matches what the user defined. Pod CPU utilization comes from the metrics-server component, so metrics-server must be installed beforehand.

The HPA can scale dynamically based on memory, CPU, or request concurrency.

An HPA can be created quickly with the kubectl autoscale command, or from a YAML configuration file. Before creating an HPA, a Deployment/RC object must already exist, and the Pods in that Deployment/RC must define resources.requests.cpu; without this value, metrics-server cannot collect the Pod's CPU usage and the HPA will not work properly.
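
For the YAML route, a sketch of a manifest roughly equivalent to the kubectl autoscale command used below, written against the autoscaling/v1 API and assuming the web1 Deployment:

# Create the HPA from an inline manifest instead of `kubectl autoscale`.
kubectl apply -f - <<'EOF'
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web1
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web1
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
EOF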

Verify that metrics-server monitoring is working

┌──[root@vms81.liruilongs.github.io]-[~/ansible/metrics/deploy/1.8+]
└─$kubectl top nodes
NAME                         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
vms81.liruilongs.github.io   401m         20%    1562Mi          40%
vms82.liruilongs.github.io   228m         11%    743Mi           19%
vms83.liruilongs.github.io   221m         11%    720Mi           18%

Configure the HPA
Set the replica count to a minimum of 2 and a maximum of 10, with a target CPU utilization of 80%.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl autoscale deployment  web1  --min=2 --max=10 --cpu-percent=80
horizontalpodautoscaler.autoscaling/web1 autoscaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  get hpa
NAME   REFERENCE         TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
web1   Deployment/web1   <unknown>/80%   2         10        2          15s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl delete hpa web1
horizontalpodautoscaler.autoscaling "web1" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$

The current CPU usage shows as <unknown>; for the time being I do not have a fix for this.
ngixndeplog.yaml (with CPU requests and limits added):

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: web1
  name: web1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: web1
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 500m
          requests:
            cpu: 200m

Testing the HPA

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$cat ngixndeplog.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginxdep
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: web
        resources:
          requests:
            cpu: 100m
      restartPolicy: Always

Set up the HPA: kubectl autoscale deployment nginxdep --max=5 --cpu-percent=50

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  get  deployments.apps
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nginxdep   2/2     2            2           8m8s
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl autoscale deployment nginxdep --max=5 --cpu-percent=50
horizontalpodautoscaler.autoscaling/nginxdep autoscaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   0          97s   10.244.171.140   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-cb57p   1/1     Running   0          97s   10.244.70.10     vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl get hpa -o wide
NAME       REFERENCE             TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
nginxdep   Deployment/nginxdep   <unknown>/50%   1         5         2          21s

Create a Service (svc), then simulate requests

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  expose  --name=nginxsvc deployment  nginxdep  --port=80
service/nginxsvc exposed
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  get svc -o wide
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE   SELECTOR
nginxsvc   ClusterIP   10.104.147.65   <none>        80/TCP    9s    app=nginx

Test calling the Service

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m  shell -a "curl http://10.104.147.65 "
192.168.26.83 | CHANGED | rc=0 >>
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   615  100   615    0     0   304k      0 --:--:-- --:--:-- --:--:--  600k

Install httpd-tools (which provides the ab HTTP load-testing tool) and generate load

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m shell -a "yum install httpd-tools -y"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible 192.168.26.83 -m  shell -a "ab -t 600 -n 1000000 -c 1000 http://10.104.147.65/ " &
[1] 123433
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

Watch how the Pods change

Deployment robustness test

┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl  scale  deployment  nginxdep  --replicas=3
deployment.apps/nginxdep scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl  get pods -o wide
NAME                        READY   STATUS    RESTARTS        AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   1 (3m19s ago)   47m   10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running   0               30s   10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-vz5qt   1/1     Running   0               30s   10.244.70.11     vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$

Shut down vms83.liruilongs.github.io; after a while you will find that all the Pods are running on vms82.liruilongs.github.io.

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl  get nodes
NAME                         STATUS     ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready      control-plane,master   47h   v1.22.2
vms82.liruilongs.github.io   Ready      <none>                 47h   v1.22.2
vms83.liruilongs.github.io   NotReady   <none>                 47h   v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get pods -o wide
NAME                        READY   STATUS        RESTARTS      AGE     IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running       1 (20m ago)   64m     10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running       0             17m     10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-9hzf2   1/1     Running       0             9m48s   10.244.171.145   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-vz5qt   1/1     Terminating   0             17m     10.244.70.11     vms83.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS      AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   1 (27m ago)   71m   10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running   0             24m   10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-9hzf2   1/1     Running   0             16m   10.244.171.145   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl  top pods
NAME                        CPU(cores)   MEMORY(bytes)
nginxdep-645bf755b9-27hzn   0m           4Mi
nginxdep-645bf755b9-4dkpp   0m           1Mi
nginxdep-645bf755b9-9hzf2   0m           1Mi
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

After vms83.liruilongs.github.io is brought back up, the Pods do not move back to vms83.liruilongs.github.io.

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get nodes
NAME                         STATUS   ROLES                  AGE   VERSION
vms81.liruilongs.github.io   Ready    control-plane,master   2d    v1.22.2
vms82.liruilongs.github.io   Ready    <none>                 2d    v1.22.2
vms83.liruilongs.github.io   Ready    <none>                 2d    v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS      AGE   IP               NODE                         NOMINATED NODE   READINESS GATES
nginxdep-645bf755b9-27hzn   1/1     Running   1 (27m ago)   71m   10.244.171.141   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-4dkpp   1/1     Running   0             24m   10.244.171.144   vms82.liruilongs.github.io   <none>           <none>
nginxdep-645bf755b9-9hzf2   1/1     Running   0             16m   10.244.171.145   vms82.liruilongs.github.io   <none>           <none>
┌──[root@vms81.liruilongs.github.io]-[~]
└─$

Deployment: rolling image updates and rollback

When a service in the cluster needs to be upgraded, we have to stop all Pods currently associated with that service, pull the new image, and create new Pods. If the cluster is large this becomes a challenge, and stopping everything first and then upgrading step by step leads to a long period of service unavailability.

Kubernetes provides rolling updates to solve this problem. If the Pods were created by a Deployment, the user can modify the Deployment's Pod definition (spec.template) or its image name at runtime and apply the change to the Deployment object; the system then carries out the update automatically. If an error occurs during the update, the Pod version can be restored with a rollback operation.
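
The pace of such a rolling update is controlled by the Deployment's spec.strategy. As a sketch, the defaults (25%/25%) could be set explicitly on the nginxdep Deployment used below with a patch like this:

# maxSurge: how many extra Pods may be created above the desired count during the update.
# maxUnavailable: how many Pods may be unavailable during the update.
kubectl patch deployment nginxdep --type merge -p \
  '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"}}}}'
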
Preparing the environment

┌──[root@vms81.liruilongs.github.io]-[~]
└─$kubectl scale deployment  nginxdep  --replicas=5
deployment.apps/nginxdep scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "docker pull nginx:1.9"
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$ansible node -m shell -a "docker pull nginx:1.7.9"

Rolling update of the Deployment's image

Now the Pod image needs to be updated to nginx:1.9. We can set the new image name on the Deployment with kubectl set image deployment/<deployment-name> <container-name>=nginx:1.9 --record.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl set image deployment/nginxdep web=nginx:1.9 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginxdep image updated
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  get pods
NAME                        READY   STATUS              RESTARTS      AGE
nginxdep-59d7c6b6f-6hdb8    0/1     ContainerCreating   0             26s
nginxdep-59d7c6b6f-bd5z2    0/1     ContainerCreating   0             26s
nginxdep-59d7c6b6f-jb2j7    1/1     Running             0             26s
nginxdep-59d7c6b6f-jd5df    0/1     ContainerCreating   0             4s
nginxdep-645bf755b9-27hzn   1/1     Running             1 (51m ago)   95m
nginxdep-645bf755b9-4dkpp   1/1     Running             0             48m
nginxdep-645bf755b9-hkcqx   1/1     Running             0             18m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  get pods
NAME                        READY   STATUS              RESTARTS      AGE
nginxdep-59d7c6b6f-6hdb8    0/1     ContainerCreating   0             51s
nginxdep-59d7c6b6f-bd5z2    1/1     Running             0             51s
nginxdep-59d7c6b6f-jb2j7    1/1     Running             0             51s
nginxdep-59d7c6b6f-jd5df    0/1     ContainerCreating   0             29s
nginxdep-59d7c6b6f-prfzd    0/1     ContainerCreating   0             14s
nginxdep-645bf755b9-27hzn   1/1     Running             1 (51m ago)   96m
nginxdep-645bf755b9-4dkpp   1/1     Running             0             49m
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  get pods
NAME                       READY   STATUS    RESTARTS   AGE
nginxdep-59d7c6b6f-6hdb8   1/1     Running   0          2m28s
nginxdep-59d7c6b6f-bd5z2   1/1     Running   0          2m28s
nginxdep-59d7c6b6f-jb2j7   1/1     Running   0          2m28s
nginxdep-59d7c6b6f-jd5df   1/1     Running   0          2m6s
nginxdep-59d7c6b6f-prfzd   1/1     Running   0          111s

From the AGE column you can see the nginx version being rolled from latest to 1.9, and then on to 1.7.9.

┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl set image deployment/nginxdep web=nginx:1.7.9 --record
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/nginxdep image updated
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-deploy-create]
└─$kubectl  get pods
NAME                        READY   STATUS    RESTARTS   AGE
nginxdep-66587778f6-9jqfz   1/1     Running
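
With the two image updates recorded via --record, the revision history can be inspected and rolled back. A minimal sketch:

# List the revisions recorded for the Deployment (the --record flag fills in CHANGE-CAUSE).
kubectl rollout history deployment/nginxdep
# Roll back to the previous revision (nginx:1.9 in this walkthrough):
kubectl rollout undo deployment/nginxdep
# Or roll back to a specific revision number shown by `rollout history`:
kubectl rollout undo deployment/nginxdep --to-revision=1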
