Kubernetes: Application Rolling Upgrades


A rolling update (rolling upgrade) replaces only a small batch of replicas at a time; once that batch succeeds, it proceeds to the next, until every replica has been updated.
The benefit: no downtime, so the upgrade is seamless.

The process is sketched below (the original diagram, found online, is not reproduced here).

2. Run the deployment and inspect it

Note that the current httpd version is 2.2.31; the upgrade begins below.

After the update completes, the httpd image has changed to httpd:2.2.32.

The Message output shows the two ReplicaSets updating Pods step by step:
httpd-9658687dd is the original ReplicaSet with 3 Pods; httpd-76c8bd9f65 is newly created with 0 Pods. Then, in sequence:
httpd-76c8bd9f65 scales up to 1, httpd-9658687dd scales down to 2;
httpd-76c8bd9f65 scales up to 2, httpd-9658687dd scales down to 1;
httpd-76c8bd9f65 scales up to 3, httpd-9658687dd scales down to 0.
The number of Pods replaced in each step of a rolling update is configurable, via the two parameters maxSurge and maxUnavailable.
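Both parameters live under the Deployment's update strategy; a minimal sketch (the values here are illustrative, not taken from the original manifest):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most 1 Pod above the desired replica count during the update
      maxUnavailable: 0    # no Pod may become unavailable while updating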

Every time kubectl apply updates the application, it records the configuration in use as a revision. By default Kubernetes keeps only the most recent few revisions, but the number retained can be set in the Deployment's configuration file through the revisionHistoryLimit field.
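The field sits at the top level of the Deployment spec; a sketch with an illustrative value:

spec:
  revisionHistoryLimit: 10   # keep the 10 most recent revisions available for rollback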

Copy the httpd.yaml file above three times, naming the copies httpd1.yaml, httpd2.yaml, and httpd3.yaml, and change the image in them to httpd:2.4.16, httpd:2.4.17, and httpd:2.4.18 respectively.

The output above shows the version being upgraded from 2.4.16 to 2.4.17 and then to 2.4.18, three operations in total; this time, each kubectl apply was run with --record.

View the revision history

The CHANGE_CAUSE column is populated because --record was used. REVISION is the version number; to roll back to revision 1, run:
kubectl rollout undo deployment httpd --to-revision=1
To roll back only to the previous version, --to-revision can be omitted.
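Putting the pieces together, a sketch of the whole apply-and-rollback workflow with the files above:

[root@k8s-master ~]# kubectl apply -f httpd1.yaml --record
[root@k8s-master ~]# kubectl apply -f httpd2.yaml --record
[root@k8s-master ~]# kubectl apply -f httpd3.yaml --record
[root@k8s-master ~]# kubectl rollout history deployment httpd    # lists REVISION and CHANGE-CAUSE
[root@k8s-master ~]# kubectl rollout undo deployment httpd --to-revision=1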

Kubernetes Orchestration

Kubernetes (K8S) is a container-centric infrastructure: an open-source platform for scheduling and running containers on clusters of physical or virtual machines, providing automated container deployment, scaling, and management. It covers the common needs of applications running in production: replica management, horizontal auto-scaling, naming and discovery, load balancing, rolling upgrades, resource monitoring, and so on.

Key K8S features

Automatic service discovery and load balancing
Self-healing
Rolling upgrades and one-step rollback
Elastic scaling

K8S core components

Component            Description
etcd                 Stores the state of the entire cluster.
apiserver            The single entry point for resource operations; provides authentication, authorization, access control, API registration, and discovery.
controller manager   Maintains cluster state: failure detection, auto-scaling, rolling updates, and so on.
scheduler            Schedules resources, placing Pods onto machines according to the configured scheduling policy.
kubelet              Maintains the container lifecycle and manages Volumes (CVI) and networking (CNI).
Container runtime    Manages images and actually runs Pods and containers (CRI).
kube-proxy           Provides in-cluster service discovery and load balancing for Services.

Core component architecture (diagram omitted).

Besides the core components, several add-ons are recommended:

Component               Description
kube-dns                Provides DNS for the whole cluster
Ingress Controller      Provides an external entry point for services
Heapster                Provides resource monitoring
Dashboard               Provides a GUI
Federation              Provides clusters spanning availability zones
Fluentd-elasticsearch   Provides cluster log collection, storage, and querying

Installing and Deploying K8S

Hostname     IP address
k8s-master 10.0.1.11
k8s-node01 10.0.1.12
k8s-node02 10.0.1.13
#Modify the IP address and hosts file (all hosts)
[root@k8s-master ~]# vim /etc/hosts
10.0.1.11 k8s-master
10.0.1.12 k8s-node01
10.0.1.13 k8s-node02
#Install docker-1.12.6-68 on all nodes
[root@k8s-master tools]# ll
total 114652
-rw-r--r-- 1 root root 36208640 Dec 17  2018 docker-k8s.tar
-rw-r--r-- 1 root root 50826881 Jul 29 10:34 k8s-master.zip
-rw-r--r-- 1 root root 30361600 Dec 17  2018 k8s-node.tar
[root@k8s-master tools]# tar xf docker-k8s.tar
[root@k8s-master tools]# cd pkg/
[root@k8s-master pkg]# yum -y localinstall *.rpm
[root@k8s-master pkg]# systemctl start docker
[root@k8s-master pkg]# docker version
Client:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-68.gitec8512b.el7.centos.x86_64
 Go version:      go1.8.3
 Git commit:      ec8512b/1.12.6
 Built:           Mon Dec 11 16:08:42 2017
 OS/Arch:         linux/amd64

Server:
 Version:         1.12.6
 API version:     1.24
 Package version: docker-1.12.6-68.gitec8512b.el7.centos.x86_64
 Go version:      go1.8.3
 Git commit:      ec8512b/1.12.6
 Built:           Mon Dec 11 16:08:42 2017
 OS/Arch:         linux/amd64
[root@k8s-master pkg]# 

##Install etcd on the master node
[root@k8s-master ~]# yum -y install etcd
#Configure etcd
[root@k8s-master ~]# vim /etc/etcd/etcd.conf
6 ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
21 ETCD_ADVERTISE_CLIENT_URLS="http://10.0.1.11:2379"
[root@k8s-master ~]# systemctl start etcd.service 
[root@k8s-master ~]# systemctl enable etcd.service 
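Before moving on, confirm that etcd answers on the advertised client URL; a quick sketch using the v2 etcdctl shipped with this etcd package:

[root@k8s-master ~]# etcdctl --endpoints=http://10.0.1.11:2379 cluster-health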

#Install kubernetes on the master node
[root@k8s-master tools]# cd k8s-master/
[root@k8s-master k8s-master]# ll
-rw-r--r-- 1 root root 15012748 Jul  4  2017 kubernetes-client-1.5.2-0.7.git269f928.el7.x86_64.rpm
-rw-r--r-- 1 root root 26082448 Jul  4  2017 kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64.rpm
[root@k8s-master k8s-master]# yum -y localinstall kubernetes-*.rpm

#Configure the Master node
[root@k8s-master k8s-master]# grep -Ev '#|^$' /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet-port=10250"
KUBE_ETCD_SERVERS="--etcd-servers=http://10.0.1.11:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""
[root@k8s-master k8s-master]# 
[root@k8s-master k8s-master]# grep -Ev '#|^$' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://10.0.1.11:8080"
[root@k8s-master k8s-master]# systemctl enable kube-apiserver.service 
[root@k8s-master k8s-master]# systemctl start kube-apiserver.service 
[root@k8s-master k8s-master]# systemctl enable kube-controller-manager.service 
[root@k8s-master k8s-master]# systemctl start kube-controller-manager.service 
[root@k8s-master k8s-master]# systemctl start kube-scheduler.service 
[root@k8s-master k8s-master]# systemctl enable kube-scheduler.service 

#Status of the master's core components
[root@k8s-master ~]# kubectl get componentstatus 
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
scheduler            Healthy   ok                  
[root@k8s-master ~]# 


#Install kubernetes on the node machines
[root@k8s-node01 tools]# mkdir k8s-node
[root@k8s-node01 tools]# tar xf k8s-node.tar -C k8s-node
[root@k8s-node01 tools]# cd k8s-node/
[root@k8s-node01 k8s-node]# yum -y localinstall *.rpm

#Configure the node machines
[root@k8s-node01 ~]# grep -Ev '#|^$' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://10.0.1.11:8080"
[root@k8s-node01 ~]# 
[root@k8s-node01 ~]# grep -Ev '#|^$' /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=0.0.0.0"
KUBELET_PORT="--port=10250"
KUBELET_HOSTNAME="--hostname-override=10.0.1.12"
KUBELET_API_SERVER="--api-servers=http://10.0.1.11:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"
KUBELET_ARGS=""
[root@k8s-node01 ~]# 
[root@k8s-node01 ~]# systemctl enable kubelet.service
[root@k8s-node01 ~]# systemctl start kubelet.service
[root@k8s-node01 ~]# systemctl enable kube-proxy.service
[root@k8s-node01 ~]# systemctl start kube-proxy.service

#Verify from the master node
[root@k8s-master ~]# kubectl get nodes
NAME        STATUS    AGE
10.0.1.12   Ready     11m
10.0.1.13   Ready     2s
[root@k8s-master ~]# 

#Install the flannel network on all nodes (used for container-to-container communication across hosts)
[root@k8s-master ~]# yum -y install flannel
[root@k8s-master ~]# grep -Ev '#|^$' /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://10.0.1.11:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
[root@k8s-master ~]# 
#Configure the network on the master node
[root@k8s-master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "172.16.0.0/16" }'
[root@k8s-master ~]# etcdctl get /atomic.io/network/config
{ "Network": "172.16.0.0/16" }

#Restart services on the master node
[root@k8s-master ~]# systemctl enable flanneld.service 
[root@k8s-master ~]# systemctl start flanneld.service 
[root@k8s-master ~]# service docker restart
[root@k8s-master ~]# systemctl restart kube-apiserver.service
[root@k8s-master ~]# systemctl restart kube-controller-manager.service
[root@k8s-master ~]# systemctl restart kube-scheduler.service

#Restart services on the nodes
[root@k8s-node02 ~]# systemctl enable flanneld.service 
[root@k8s-node02 ~]# systemctl start flanneld.service 
[root@k8s-node02 ~]# service docker restart
[root@k8s-node02 ~]# systemctl restart kubelet.service
[root@k8s-node02 ~]# systemctl restart kube-proxy.service

Configuring the master as an image registry

#Master node
[root@k8s-master ~]# grep -Ev '^$|#' /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.1.11:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
[root@k8s-master ~]# 
[root@k8s-master ~]# systemctl restart docker

#Node machines
[root@k8s-node01 ~]# grep -Ev '^$|#' /etc/sysconfig/docker
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false --registry-mirror=https://registry.docker-cn.com --insecure-registry=10.0.1.11:5000'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
[root@k8s-node01 ~]# 
[root@k8s-node01 ~]# systemctl restart docker
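The options above only tell Docker to trust 10.0.1.11:5000; the registry itself still has to be running on the master. A sketch using the stock registry:2 image (the /opt/registry host path is an assumption):

[root@k8s-master ~]# docker run -d --name registry --restart=always -p 5000:5000 -v /opt/registry:/var/lib/registry registry:2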

Creating a Pod

In K8S, everything is a resource.

[root@k8s-master ~]# cat k8s_pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: web
spec:
  containers:
    - name: nginx
      image: 10.0.1.11:5000/nginx:1.13
      ports:
        - containerPort: 80
[root@k8s-master ~]# 
[root@k8s-master ~]# docker pull nginx:1.13
[root@k8s-master ~]# docker tag docker.io/nginx:1.13 10.0.1.11:5000/nginx:1.13
[root@k8s-master ~]# docker push 10.0.1.11:5000/nginx:1.13

#Create the resource
[root@k8s-master ~]# kubectl create -f k8s_pod.yaml 
pod "nginx" created
[root@k8s-master ~]# 

#Check the resource status
[root@k8s-master ~]# kubectl get pods
NAME      READY     STATUS              RESTARTS   AGE
nginx     0/1       ContainerCreating   0          24s
[root@k8s-master ~]# 

#Inspect the pod's detailed description and events
[root@k8s-master ~]# kubectl describe pod nginx

#Modify the node configuration
[root@k8s-node02 ~]# vim /etc/kubernetes/kubelet 
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=10.0.1.11:5000/pod-infrastructure:latest"
[root@k8s-node02 ~]# systemctl restart kubelet.service 

#Prepare the pod-infrastructure image on the master
[root@k8s-master ~]# docker pull docker.io/tianyebj/pod-infrastructure
[root@k8s-master ~]# docker tag docker.io/tianyebj/pod-infrastructure 10.0.1.11:5000/pod-infrastructure:latest
[root@k8s-master ~]# docker push 10.0.1.11:5000/pod-infrastructure:latest

#Get the status of the running pod
[root@k8s-master ~]# kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          22m
[root@k8s-master ~]# 
[root@k8s-master ~]# kubectl get pods -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
nginx     1/1       Running   0          1h        172.16.42.2   10.0.1.13
[root@k8s-master ~]# ping 172.16.42.2
64 bytes from 172.16.42.2: icmp_seq=1 ttl=61 time=1.23 ms

#Delete the pod
[root@k8s-master ~]# kubectl delete pod nginx

K8S Core Concepts

Pod

The Pod is K8S's basic unit of operation and the vehicle in which applications run; the entire K8S system revolves around Pods: how to deploy and run them, how to keep the right number of them, how to access them, and so on. A Pod is a collection of one or more related containers, which is a notable innovation: it provides a composition model for containers.

When the master creates a Pod resource, at least two containers are started: the business container and the pod-infrastructure container.

#Help for writing the yml
[root@k8s-master ~]# kubectl explain pod.spec

#View the resource's state and edit it in place
[root@k8s-master ~]# kubectl edit pod nginx 

Replication Controller

RC is another core K8S concept. Once an application is hosted on K8S, K8S must keep it running; that is the RC's job: it ensures that the specified number of Pods is running in K8S at all times. On top of that, the RC provides higher-level features such as rolling upgrades and rollback.

The RC's role is to keep Pods highly available.

[root@k8s-master ~]# cat nginx-rc.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: 10.0.1.11:5000/nginx:1.13
        ports:
        - containerPort: 80
[root@k8s-master ~]# kubectl create -f nginx-rc.yaml 
replicationcontroller "myweb" created
[root@k8s-master ~]# kubectl get pod -o wide
NAME          READY     STATUS    RESTARTS   AGE       IP            NODE
myweb-lc9qh   1/1       Running   0          35s       172.16.42.3   10.0.1.13
myweb-rqs87   1/1       Running   0          35s       172.16.94.2   10.0.1.12
nginx         1/1       Running   0          55m       172.16.42.2   10.0.1.13

#Scale up the number of pod replicas
[root@k8s-master ~]# kubectl scale replicationcontroller myweb --replicas=10

#Scale down the number of pod replicas
[root@k8s-master ~]# kubectl scale replicationcontroller myweb --replicas=2
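Either way, the effect can be watched through the RC and its labeled Pods (a sketch using the app: myweb label from the manifest above):

[root@k8s-master ~]# kubectl get rc myweb
[root@k8s-master ~]# kubectl get pods -l app=myweb -o wide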

#Rolling upgrade
[root@k8s-master ~]# docker pull nginx
[root@k8s-master ~]# docker tag docker.io/nginx:latest 10.0.1.11:5000/nginx:latest
[root@k8s-master ~]# docker push 10.0.1.11:5000/nginx:latest
[root@k8s-master ~]# cat nginx-rcv.1.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: mywebv2
spec:
  replicas: 3
  selector:
    app: mywebv2
  template:
    metadata:
      labels:
        app: mywebv2
    spec:
      containers:
      - name: mywebv2
        image: 10.0.1.11:5000/nginx:latest
        ports:
        - containerPort: 80
[root@k8s-master ~]# kubectl rolling-update myweb -f nginx-rcv.1.yaml --update-period=5s

#Roll back
[root@k8s-master ~]# kubectl rolling-update mywebv2 -f nginx-rc.yaml --update-period=1s
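kubectl rolling-update works by creating the new RC and deleting the old one, so the rollback above is simply another rolling update run in the reverse direction. If an update is still in flight, it can also be aborted and reversed with the --rollback flag (a sketch of the documented form):

[root@k8s-master ~]# kubectl rolling-update myweb mywebv2 --rollback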

Service

In K8S, Pod replicas change while under RC control, and their IPs change with them, for example during migration or scaling; for consumers of a Pod's services this is unacceptable. A Service in K8S is an abstraction: it defines a logical set of Pods and a policy for accessing them, with the association between a Service and its Pods again based on Labels. The Service's goal is to act as a bridge: it gives consumers a fixed access address and redirects requests to the appropriate backends, so that applications not native to K8S can reach backends without writing any K8S-specific code.

The three kinds of K8S IP

Type         Description
Node IP      The IP of the node device (physical machine, VM, or other container host); its actual network address.
Pod IP       The Pod's IP address, allocated from the docker0 bridge's network segment.
Cluster IP   The Service's IP: a virtual IP that exists only for Service objects, managed and allocated by k8s. It is only usable together with the service port; the bare IP has no connectivity, and access from outside the cluster requires extra configuration.
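A sketch of the three access paths, using addresses that appear in the transcripts of this article:

curl http://172.16.42.3:80       # Pod IP: reachable only on the cluster's overlay network
curl http://10.254.113.57:80     # Cluster IP: virtual, usable only from nodes and Pods
curl http://10.0.1.12:30000      # Node IP + NodePort: reachable from outside the cluster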

Run the business inside containers;
keep the containers highly available by creating an RC from the business image;
make the containerized business reachable from outside by creating a svc resource.

[root@k8s-master ~]# cat nginx-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
      targetPort: 80
  selector:
    app: myweb

[root@k8s-master ~]# kubectl create -f nginx-svc.yaml 
[root@k8s-master ~]# kubectl get service
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.254.0.1      <none>        443/TCP        5h
myweb        10.254.113.57   <nodes>       80:30000/TCP   11m
[root@k8s-master ~]# 

#How to change the service node port range
[root@k8s-master ~]# vim /etc/kubernetes/apiserver
KUBE_API_ARGS="--service-node-port-range=3000-50000"
[root@k8s-master ~]# systemctl restart kube-apiserver.service 

#Update the resource
[root@k8s-master ~]# kubectl apply -f nginx-svc.yaml 

Deployment

K8S provides a simpler mechanism for updating RCs and Pods, called the Deployment. You describe the desired cluster state in the Deployment, and the Deployment Controller gradually drives the current cluster state toward that desired state at a controlled rate. A Deployment's main job is still to keep Pods healthy and at the right count, and 90% of its behavior is identical to the Replication Controller, so it can be seen as the next-generation Replication Controller. Beyond that, it adds new capabilities:
Full Replication Controller functionality: the Deployment inherits everything described above for the Replication Controller.
Event and status inspection: the detailed progress and status of a Deployment's upgrade can be examined.
Rollback: if a problem is found while upgrading the Pod image or related parameters, the Deployment can be rolled back to the previous stable version or to a specified revision.
Revision history: every operation on a Deployment is recorded, for possible later rollback.
Pause and resume: every upgrade can be paused and resumed at any time.
Multiple upgrade strategies: Recreate deletes all existing pods and creates new ones; RollingUpdate replaces them gradually, and supports additional parameters such as the maximum number of unavailable pods and the minimum interval between updates.

#Write the Deployment file
[root@k8s-master ~]# cat nginx-deploy.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 10.0.1.11:5000/nginx:1.13
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 100m
#Launch the Deployment
[root@k8s-master ~]# kubectl create -f nginx-deploy.yaml 
deployment "nginx-deployment" created

#View status information
[root@k8s-master ~]# kubectl get all 
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx-deployment   3         3         3            3           6m

NAME             CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   10.254.0.1   <none>        443/TCP   1d

NAME                             DESIRED   CURRENT   READY     AGE
rs/nginx-deployment-2912237156   3         3         3         6m

NAME                                   READY     STATUS    RESTARTS   AGE
po/nginx-deployment-2912237156-j6grn   1/1       Running   0          6m
po/nginx-deployment-2912237156-rlm63   1/1       Running   0          6m
po/nginx-deployment-2912237156-ssfl0   1/1       Running   0          6m
[root@k8s-master ~]# 

#Write the service file
[root@k8s-master ~]# cat nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30000
      targetPort: 80
  selector:
    app: nginx

#Launch the service
[root@k8s-master ~]# kubectl create -f nginx-svc.yaml 
service "nginx-deployment" created

#Test access
[root@k8s-master ~]# curl -I 10.0.1.12:30000
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Sun, 27 Oct 2019 10:41:35 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes
[root@k8s-master ~]# 

#Rolling upgrade
[root@k8s-master ~]# kubectl edit deployment nginx-deployment
      - image: 10.0.1.11:5000/nginx:latest
[root@k8s-master ~]# kubectl get all
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx-deployment   3         3         3            2           37s

NAME                   CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes         10.254.0.1     <none>        443/TCP        1d
svc/nginx-deployment   10.254.119.1   <nodes>       80:30000/TCP   29s

NAME                             DESIRED   CURRENT   READY     AGE
rs/nginx-deployment-2912237156   0         0         0         37s
rs/nginx-deployment-3144759342   3         3         3         3s

NAME                                   READY     STATUS    RESTARTS   AGE
po/nginx-deployment-3144759342-3vsqw   1/1       Running   0          1s
po/nginx-deployment-3144759342-kk6bg   1/1       Running   0          3s
po/nginx-deployment-3144759342-s1rf4   1/1       Running   0          3s
[root@k8s-master ~]# 
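The progress of such an in-place edit can be followed while it rolls out (a sketch):

[root@k8s-master ~]# kubectl rollout status deployment nginx-deployment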

#Test access
[root@k8s-master ~]# curl -I 10.0.1.12:30000
HTTP/1.1 200 OK
Server: nginx/1.17.5
Date: Sun, 27 Oct 2019 11:01:51 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 22 Oct 2019 14:30:00 GMT
Connection: keep-alive
ETag: "5daf1268-264"
Accept-Ranges: bytes

[root@k8s-master ~]# 

#Roll back
[root@k8s-master ~]# kubectl rollout history deployment nginx-deployment
deployments "nginx-deployment"
REVISION	CHANGE-CAUSE
1		<none>
2		<none>

[root@k8s-master ~]# kubectl rollout undo deployment nginx-deployment --to-revision=1
[root@k8s-master ~]# kubectl get all
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx-deployment   3         3         3            3           3m

NAME                   CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes         10.254.0.1     <none>        443/TCP        1d
svc/nginx-deployment   10.254.119.1   <nodes>       80:30000/TCP   3m

NAME                             DESIRED   CURRENT   READY     AGE
rs/nginx-deployment-2912237156   3         3         3         3m
rs/nginx-deployment-3144759342   0         0         0         2m

NAME                                   READY     STATUS    RESTARTS   AGE
po/nginx-deployment-2912237156-51569   1/1       Running   0          31s
po/nginx-deployment-2912237156-6wbpw   1/1       Running   0          30s
po/nginx-deployment-2912237156-pgt0d   1/1       Running   0          31s

[root@k8s-master ~]# curl -I 10.0.1.12:30000
HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Sun, 27 Oct 2019 11:04:03 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Mon, 09 Apr 2018 16:01:09 GMT
Connection: keep-alive
ETag: "5acb8e45-264"
Accept-Ranges: bytes

[root@k8s-master ~]# 

#Recording change causes with --record
[root@k8s-master ~]# kubectl run nginx --image=10.0.1.11:5000/nginx:1.13 --replicas=3 --record
deployment "nginx" created
[root@k8s-master ~]# kubectl rollout history deployment nginx
deployments "nginx"
REVISION	CHANGE-CAUSE
1		kubectl run nginx --image=10.0.1.11:5000/nginx:1.13 --replicas=3 --record
[root@k8s-master ~]# kubectl set image deploy nginx nginx=10.0.1.11:5000/nginx:latest
deployment "nginx" image updated
[root@k8s-master ~]# kubectl rollout history deployment nginx
deployments "nginx"
REVISION	CHANGE-CAUSE
1		kubectl run nginx --image=10.0.1.11:5000/nginx:1.13 --replicas=3 --record
2		kubectl set image deploy nginx nginx=10.0.1.11:5000/nginx:latest

namespace

Namespaces provide resource isolation.

#Create a namespace
[root@k8s-master ~]# kubectl create namespace opesn
namespace "opesn" created
[root@k8s-master ~]# kubectl get namespace 
NAME          STATUS    AGE
default       Active    1d
kube-system   Active    1d
opesn         Active    2s
[root@k8s-master ~]# 
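Resources land in a namespace either through metadata.namespace in the manifest or through the --namespace flag; a sketch reusing the pod manifest from earlier:

[root@k8s-master ~]# kubectl create -f k8s_pod.yaml --namespace=opesn
[root@k8s-master ~]# kubectl get pods --namespace=opesn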

Deploying the DashBoard

#Write the dashboard.yaml config file
[root@k8s-master ~]# cat dashboard.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
# Keep the name in sync with image version and
# gce/coreos/kube-manifests/addons/dashboard counterparts
  name: kubernetes-dashboard-latest
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
        version: latest
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: kubernetes-dashboard
        image: 10.0.1.11:5000/kubernetes-dashboard-amd64:v1.4.1
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        ports:
        - containerPort: 9090
        args:
         -  --apiserver-host=http://10.0.1.11:8080
        livenessProbe:
          httpGet:
            path: /
            port: 9090
          initialDelaySeconds: 30
          timeoutSeconds: 30
[root@k8s-master ~]# 

#Write dashboard-svc.yaml
[root@k8s-master ~]# cat dashboard-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kube-system
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    k8s-app: kubernetes-dashboard
  ports:
  - port: 80
    targetPort: 9090
[root@k8s-master ~]# 

#Pull the dashboard image
[root@k8s-master ~]# docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.4.1
#Push the image to the local registry
[root@k8s-master ~]# docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/kubernetes-dashboard-amd64:v1.4.1 10.0.1.11:5000/kubernetes-dashboard-amd64:v1.4.1
[root@k8s-master ~]# docker push 10.0.1.11:5000/kubernetes-dashboard-amd64:v1.4.1 

#Launch the dashboard
[root@k8s-master ~]# kubectl create -f dashboard.yaml 
[root@k8s-master ~]# kubectl create -f dashboard-svc.yaml 

#Verify:
[root@k8s-master ~]# kubectl get all --namespace=kube-system 
NAME                                 DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kubernetes-dashboard-latest   1         1         1            1           9m

NAME                       CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes-dashboard   10.254.134.222   <none>        80/TCP    8m

NAME                                        DESIRED   CURRENT   READY     AGE
rs/kubernetes-dashboard-latest-1323922574   1         1         1         9m

NAME                                              READY     STATUS    RESTARTS   AGE
po/kubernetes-dashboard-latest-1323922574-40mdj   1/1       Running   0          9m
[root@k8s-master ~]# 

#Access via browser
10.0.1.11:8080/ui

heapster

Heapster is the K8S cluster-monitoring tool. In K8S 1.2, monitoring required running cAdvisor on each node as an agent collecting resource data for the host and its containers: CPU, memory, network, filesystem, and so on. In newer K8S versions, cAdvisor is integrated into the kubelet; netstat shows the kubelet listening on a new port, 4194, which is cAdvisor's port, so cAdvisor is now reachable at http://<node-ip>:4194. Heapster collects and aggregates data through the kubelet on each node, that is, from the embedded cAdvisor, and saves it to a backend store.
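A quick way to confirm the embedded cAdvisor answers on a node (a sketch; /api/v1.3/machine is one of cAdvisor's REST endpoints and returns the node's machine info as JSON):

[root@k8s-master ~]# curl http://10.0.1.12:4194/api/v1.3/machine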

#Write the config files
[root@k8s-master heapster-influxdb]# cat grafana-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP. 
  # type: LoadBalancer
  ports:
  - port: 80
    targetPort: 3000
  selector:
    name: influxGrafana

[root@k8s-master heapster-influxdb]# cat heapster-controller.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    k8s-app: heapster
    name: heapster
    version: v6
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  selector:
    k8s-app: heapster
    version: v6
  template:
    metadata:
      labels:
        k8s-app: heapster
        version: v6
    spec:
      containers:
      - name: heapster
        image: 10.0.1.11:5000/heapster:canary
        imagePullPolicy: Always
        command:
        - /heapster
        - --source=kubernetes:http://10.0.1.11:8080?inClusterConfig=false
        - --sink=influxdb:http://monitoring-influxdb:8086

[root@k8s-master heapster-influxdb]# cat heapster-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster

[root@k8s-master heapster-influxdb]# cat influxdb-grafana-controller.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  labels:
    name: influxGrafana
  name: influxdb-grafana
  namespace: kube-system
spec:
  replicas: 1
  selector:
    name: influxGrafana
  template:
    metadata:
      labels:
        name: influxGrafana
    spec:
      containers:
      - name: influxdb
        image: 10.0.1.11:5000/heapster_influxdb:v0.5
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      - name: grafana
        image: 10.0.1.11:5000/heapster_grafana:v2.6.0
        env:
          - name: INFLUXDB_SERVICE_URL
            value: http://monitoring-influxdb:8086
            # The following env variables are required to make Grafana accessible via
            # the kubernetes api-server proxy. On production clusters, we recommend
            # removing these env variables, setup auth for grafana, and expose the grafana
            # service using a LoadBalancer or a public IP.
          - name: GF_AUTH_BASIC_ENABLED
            value: "false"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ORG_ROLE
            value: Admin
          - name: GF_SERVER_ROOT_URL
            value: /api/v1/proxy/namespaces/kube-system/services/monitoring-grafana/
        volumeMounts:
        - mountPath: /var
          name: grafana-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
      - name: grafana-storage
        emptyDir: {}

[root@k8s-master heapster-influxdb]# cat influxdb-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels: null
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 8083
    targetPort: 8083
  - name: api
    port: 8086
    targetPort: 8086
  selector:
    name: influxGrafana
[root@k8s-master heapster-influxdb]# 

#Tag and push the images
[root@k8s-master ~]# docker tag docker.io/kubernetes/heapster_grafana:v2.6.0 10.0.1.11:5000/heapster_grafana:v2.6.0
[root@k8s-master ~]# docker tag docker.io/kubernetes/heapster_influxdb:v0.5  10.0.1.11:5000/heapster_influxdb:v0.5
[root@k8s-master ~]# docker tag docker.io/kubernetes/heapster:canary 10.0.1.11:5000/heapster:canary
[root@k8s-master ~]# docker push 10.0.1.11:5000/heapster_grafana:v2.6.0 
[root@k8s-master ~]# docker push 10.0.1.11:5000/heapster_influxdb:v0.5 
[root@k8s-master ~]# docker push 10.0.1.11:5000/heapster:canary 

#Create all the resources
[root@k8s-master heapster-influxdb]# kubectl create -f .

Elastic Scaling

#Autoscaling configuration command
[root@k8s-master ~]# kubectl autoscale deployment nginx-deployment --max=8 --min=1 --cpu-percent=8
[root@k8s-master ~]# kubectl get all 
NAME                      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/nginx-deployment   1         1         1            1           2m

NAME                   REFERENCE                     TARGET    CURRENT   MINPODS   MAXPODS   AGE
hpa/nginx-deployment   Deployment/nginx-deployment   8%        0%        1         8         1m

NAME                   CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
svc/kubernetes         10.254.0.1       <none>        443/TCP        2m
svc/nginx-deployment   10.254.184.154   <nodes>       80:30000/TCP   2m

NAME                             DESIRED   CURRENT   READY     AGE
rs/nginx-deployment-2912237156   1         1         1         2m

NAME                                   READY     STATUS    RESTARTS   AGE
po/nginx-deployment-2912237156-7kn24   1/1       Running   0          2m
[root@k8s-master ~]# 
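To watch the HPA react, generate some load against the NodePort service and then check the replica count (a rough sketch; stop the loop when done):

[root@k8s-master ~]# while true; do curl -s http://10.0.1.12:30000 >/dev/null; done &
[root@k8s-master ~]# kubectl get hpa nginx-deployment
[root@k8s-master ~]# kubectl get pods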

Configuring DNS for K8S

[root@k8s-master ~]# cat skydns-rc.yaml
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# TODO - At some point, we need to rename all skydns-*.yaml.* files to kubedns-*.yaml.*
# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# __MACHINE_GENERATED_WARNING__

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  replicas: 1
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
      containers:
      - name: kubedns
        image: myhub.fdccloud.com/library/kubedns-amd64:1.9
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthz-kubedns
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-map=kube-dns
        - --kube-master-url=http://10.0.1.11:8080
        # This should be set to v=2 only after the new image (cut from 1.5) has
        # been released, otherwise we will flood the logs.
        - --v=0
        #__PILLAR__FEDERATIONS__DOMAIN__MAP__
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
      - name: dnsmasq
        image: myhub.fdccloud.com/library/kube-dnsmasq-amd64:1.4
        livenessProbe:
          httpGet:
            path: /healthz-dnsmasq
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --no-resolv
        - --server=127.0.0.1#10053
        #- --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: dnsmasq-metrics
        image: myhub.fdccloud.com/library/dnsmasq-metrics-amd64:1.0
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 10Mi
      - name: healthz
        image: myhub.fdccloud.com/library/exechealthz-amd64:1.2
        resources:
          limits:
            memory: 50Mi
          requests:
            cpu: 10m
            # Note that this container shouldn't really need 50Mi of memory. The
            # limits are set higher than expected pending investigation on #29688.
            # The extra memory was stolen from the kubedns container to keep the
            # net memory requested by the pod constant.
            memory: 50Mi
        args:
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null
        - --url=/healthz-dnsmasq
        - --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null
        - --url=/healthz-kubedns
        - --port=8080
        - --quiet
        ports:
        - containerPort: 8080
          protocol: TCP
      dnsPolicy: Default  # Don't use cluster DNS.

[root@k8s-master ~]# cat skydns-svc.yaml 
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# TODO - At some point, we need to rename all skydns-*.yaml.* files to kubedns-*.yaml.*

# Warning: This is a file generated from the base underscore template file: skydns-svc.yaml.base

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.230.254
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
[root@k8s-master ~]# 

#Launch
[root@k8s-master ~]# kubectl create -f skydns-rc.yaml 
deployment "kube-dns" created
[root@k8s-master ~]# kubectl create -f skydns-svc.yaml 
service "kube-dns" created
[root@k8s-master ~]# 

#Modify the kubelet configuration on the node machines
[root@k8s-node01 ~]# vim /etc/kubernetes/kubelet 
KUBELET_ARGS="--cluster_dns=10.254.230.254 --cluster_domain=cluster.local"
[root@k8s-node01 ~]# systemctl restart kubelet.service 
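In-cluster DNS can then be verified from a throwaway Pod (a sketch; assumes a busybox image can be pulled by the nodes):

[root@k8s-master ~]# kubectl run dns-test -it --image=busybox --restart=Never -- nslookup kubernetes.default.svc.cluster.local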

Volume

In Docker's design, the data inside a container is ephemeral: when the container is destroyed, the data is lost. Persisting data requires a Docker volume that mounts a host file or directory into the container. The same applies in K8S: when a Pod is rebuilt its data is lost, and K8S likewise provides Pod data persistence through volume mounts. K8S volumes extend Docker volumes; a K8S volume is scoped to the Pod, so it can also be used to share files among the Pod's containers.

PV: PersistentVolume, the provisioned pool of storage resources.
PVC: PersistentVolumeClaim, a request that allocates storage from that pool.

#Pod persistence configuration
#Install and configure the NFS server
[root@k8s-master ~]# yum -y install nfs-utils.x86_64
[root@k8s-master ~]# vim /etc/exports
/data/mysql 10.0.1.0/24(rw,async,no_root_squash,no_all_squash)
[root@k8s-master ~]# mkdir -p /data/mysql
[root@k8s-master ~]# systemctl start rpcbind
[root@k8s-master ~]# systemctl start nfs

#Install the NFS client on the node machines
[root@k8s-node01 ~]# yum -y install nfs-utils.x86_64 
[root@k8s-node01 ~]# showmount -e 10.0.1.11
Export list for 10.0.1.11:
/data/mysql 10.0.1.0/24
[root@k8s-node01 ~]# 

#Create the PV
[root@k8s-master mysql]# cat mysql_pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql
  labels:
    type: nfs001
spec:
  capacity:
    storage: 10Gi 
  accessModes:
    - ReadWriteMany 
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/data/mysql"
    server: 10.0.1.11
    readOnly: false

[root@k8s-master mysql]# kubectl create -f mysql_pv.yaml
persistentvolume "mysql" created
[root@k8s-master mysql]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     REASON    AGE
mysql     10Gi       RWX           Recycle         Available                       9s
[root@k8s-master mysql]# 

#Create the PVC
[root@k8s-master mysql]# cat mysql_pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mysql
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
[root@k8s-master mysql]# kubectl create -f mysql_pvc.yaml
persistentvolumeclaim "mysql" created
[root@k8s-master mysql]# kubectl get pvc
NAME      STATUS    VOLUME    CAPACITY   ACCESSMODES   AGE
mysql     Bound     mysql     10Gi       RWX           5s
[root@k8s-master mysql]# kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM           REASON    AGE
mysql     10Gi       RWX           Recycle         Bound     default/mysql             55s
[root@k8s-master mysql]# 

#Create the MySQL RC that uses the PVC
[root@k8s-master mysql]# cat mysql-rc-pvc.yml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: mysql
spec:
  replicas: 1
  selector:
    app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
        - name: mysql
          image: 10.0.1.11:5000/mysql:5.7
          ports:
          - containerPort: 3306
          env:
          - name: MYSQL_ROOT_PASSWORD
            value: '123456'
          volumeMounts:
          - name: nfs
            mountPath: /var/lib/mysql
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: mysql
[root@k8s-master mysql]# 
[root@k8s-master mysql]# kubectl create -f mysql-rc-pvc.yml
replicationcontroller "mysql" created
[root@k8s-master mysql]# 
[root@k8s-master mysql]# ls /data/mysql/
auto.cnf  ib_buffer_pool  ibdata1  ib_logfile0  ib_logfile1  ibtmp1  mysql  performance_schema  sys
[root@k8s-master mysql]# 
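To confirm the persistence actually works, delete the Pod and let the RC recreate it; the replacement mounts the same NFS-backed claim (a sketch):

[root@k8s-master mysql]# kubectl delete pod -l app=mysql
[root@k8s-master mysql]# kubectl get pods -l app=mysql    # a fresh mysql-* Pod comes up
[root@k8s-master mysql]# ls /data/mysql/                  # the data files are still on the NFS export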

Automatic updates with Jenkins + K8S

Hostname     IP address
k8s-master 10.0.1.11
k8s-node01 10.0.1.12
k8s-node02 10.0.1.13
gitlab 10.0.1.80
#Install jenkins on the node host
[root@k8s-node01 ~]# rpm -ivh jdk-8u181-linux-x64.rpm 
[root@k8s-node01 ~]# rpm -ivh jenkins-2.99-1.1.noarch.rpm 
[root@k8s-node01 ~]# vim /etc/sysconfig/jenkins
JENKINS_USER="root"
#Start jenkins
[root@jenkins ~]# systemctl start jenkins
[root@k8s-node01 ~]# 

#Install GitLab on the gitlab host
[root@gitlab ~]# yum -y install policycoreutils-python openssh-server
[root@gitlab ~]# rpm -ivh gitlab-ce-10.2.2-ce.0.el7.x86_64.rpm
[root@gitlab ~]# vim /etc/gitlab/gitlab.rb
external_url 'http://10.0.1.80'
[root@gitlab ~]# gitlab-ctl reconfigure

Building the project

##On the gitlab host
#Configure the gitlab user and email
[root@gitlab dzp]# git config --global user.name "Administrator"
[root@gitlab dzp]# git config --global user.email "admin@example.com"
#Prepare the code files
[root@gitlab dzp]# ll html/
total 12
drwxr-xr-x 2 root root   51 Feb 12  2017 css
drwxr-xr-x 2 root root   83 Feb 12  2017 fonts
drwxr-xr-x 2 root root   41 Feb 12  2017 images
-rw-r--r-- 1 root root 3073 Feb 12  2017 index.html
-rw-r--r-- 1 root root  268 Dec 14  2014 jQuery之家.url
drwxr-xr-x 2 root root   49 Feb 12  2017 js
-rw-r--r-- 1 root root  865 Oct 10  2014 readme.html
[root@gitlab dzp]# 
#Write the dockerfile
[root@gitlab dzp]# cat dockerfile 
FROM 10.0.1.11:5000/nginx:1.13
ADD html /usr/share/nginx/html
[root@gitlab dzp]# 
#Push the code to gitlab
[root@gitlab dzp]# git init
[root@gitlab dzp]# git remote add origin git@10.0.1.80:root/opesn.git
[root@gitlab dzp]# git add .
[root@gitlab dzp]# git commit -m "Initial commit"
[root@gitlab dzp]# git push -u origin master

#k8s-master
[root@k8s-master ~]# kubectl create namespace dzp
namespace "dzp" created
[root@k8s-master ~]# 

In the Jenkins job, add a shell build step with the following commands (the configuration screenshots are omitted):

docker build -t 10.0.1.11:5000/dzp:v${BUILD_ID} .
docker push 10.0.1.11:5000/dzp:v${BUILD_ID}
sshpass -p123456 ssh root@10.0.1.11 "kubectl run dzp --image=10.0.1.11:5000/dzp:v${BUILD_ID} --replicas=1 --record --namespace=dzp"

#Write the Service
[root@k8s-master ~]# cat nginx-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: dzp
  namespace: dzp
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    run: dzp

#Launch the service
[root@k8s-master ~]# kubectl create -f nginx-svc.yaml 

Then modify the build step so that each new build automatically updates the running version (screenshot omitted):

docker build -t 10.0.1.11:5000/dzp:v${BUILD_ID} .
docker push 10.0.1.11:5000/dzp:v${BUILD_ID}
sshpass -p123456 ssh root@10.0.1.11 "kubectl set image deploy dzp --namespace=dzp dzp=10.0.1.11:5000/dzp:v${BUILD_ID}"
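Because kubectl set image creates a new Deployment revision on every build, a bad release can be rolled back the same way as before (a sketch):

[root@k8s-master ~]# kubectl rollout undo deployment dzp --namespace=dzp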








