[kubernetes] Delivering dubbo: continuous delivery of dubbo-monitor and dubbo-consumer (final part)

Posted by 运维少年

    The goal of this exercise is to deliver dubbo services into a Kubernetes cluster, using Jenkins + Maven + Gitee; Jenkins is v2.303.1 and Maven is 3.6.3 with JRE 8u91. The overall topology of the project is as follows:

Host     Role
host11   zk node; DNS for the whole cluster; nginx proxy for the k8s cluster
host12   zk node; nginx proxy for the k8s cluster
host21   k8s node; zk node; etcd node
host22   k8s node; etcd node
host200  harbor registry node; httpd serving resource manifest files to the cluster; NFS providing persistent storage for pods

Delivery flow diagram (figure omitted)


01 Delivering dubbo-monitor

    dubbo-monitor provides a visual interface on top of the ZooKeeper registry that dubbo uses, so you can see how many consumers and providers are registered.

    1) Create the dubbo-monitor directory (host200)

mkdir -p /data/dockerfile/dubbo-monitor
cd !$

    2) Download the https://github.com/jeromefromcn/dubbo-monitor project code into the dubbo-monitor directory, for example as sketched below.
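
    One way to do this (a sketch; assuming git is installed on host200, though any other way of copying the code in works just as well):

cd /data/dockerfile/dubbo-monitor
git clone https://github.com/jeromefromcn/dubbo-monitor.git .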

    3) Modify the configuration file -- host200

vi dubbo-monitor-simple/conf/dubbo_origin.properties
dubbo.container=log4j,spring,registry,jetty
dubbo.application.name=dubbo-monitor # any name
dubbo.application.owner=od # any owner
dubbo.registry.address=zookeeper://zk1.od.com:2181?backup=zk2.od.com:2181,zk3.od.com:2181 # zk cluster address
#dubbo.registry.address=zookeeper://127.0.0.1:2181
#dubbo.registry.address=redis://127.0.0.1:6379
#dubbo.registry.address=dubbo://127.0.0.1:9090
dubbo.protocol.port=20880
dubbo.jetty.port=8080
dubbo.jetty.directory=/dubbo-monitor-simple/monitor
dubbo.charts.directory=/dubbo-monitor-simple/charts
dubbo.statistics.directory=/dubbo-monitor-simple/monitor/statistics
dubbo.log4j.file=logs/dubbo-monitor-simple.log
dubbo.log4j.level=WARN

    4) Modify the start.sh file: change the memory settings to 128M and delete everything after line 65 (see the sketch below).

vi /data/dockerfile/dubbo-monitor/dubbo-monitor-simple/bin/start.sh

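    A minimal sketch of the usual edits (the variable names come from the stock dubbo-monitor-simple start.sh, so the exact values and line numbers may differ in your copy):

# shrink the JVM so the process fits in roughly 128M instead of the stock 2G heap
JAVA_MEM_OPTS=" -server -Xmx128m -Xms128m -Xmn32m -Xss256k "

# start java in the foreground as the container's main process; the original
# background (nohup ... &) start and the log-polling loop after it are the
# lines removed from line 65 onward
exec java $JAVA_OPTS $JAVA_MEM_OPTS $JAVA_DEBUG_OPTS $JAVA_JMX_OPTS \
  -classpath $CONF_DIR:$LIB_JARS com.alibaba.dubbo.container.Main > $STDOUT_FILE 2>&1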

    5) Create the Dockerfile

vi /data/dockerfile/dubbo-monitor/Dockerfile
FROM jeromefromcn/docker-alpine-java-bash
COPY dubbo-monitor-simple/ /dubbo-monitor-simple/
CMD /dubbo-monitor-simple/bin/start.sh

    6) Build the image

docker build . -t harbor.od.com/infra/dubbo-monitor:latest
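
    The cluster nodes pull this image from harbor, so it also needs to be pushed (assuming you are already logged in to harbor.od.com):

docker push harbor.od.com/infra/dubbo-monitor:latest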

    7) Create the resource manifests -- host200

mkdir /data/k8s-yaml/dubbo-monitor
# dp
cat dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-monitor
  namespace: infra
  labels:
    name: dubbo-monitor
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dubbo-monitor
  template:
    metadata:
      labels:
        app: dubbo-monitor
        name: dubbo-monitor
    spec:
      containers:
      - name: dubbo-monitor
        image: harbor.od.com/infra/dubbo-monitor:latest
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600

# svc
cat svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: dubbo-monitor
  namespace: infra
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: dubbo-monitor
# ingress
cat ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-monitor
  namespace: infra
spec:
  rules:
  - host: dubbo-monitor.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: dubbo-monitor
          servicePort: 8080

    8) Apply them on a k8s node

kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/dp.yaml
kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/svc.yaml
kubectl apply -f http://k8s-yaml.od.com/dubbo-monitor/ingress.yaml

    9) Check the dubbo-monitor resources

[root@host21 ~]# kubectl get all -n infra| grep monitor
pod/dubbo-monitor-5bb45c8b97-pfq26 1/1 Running 3 13d
service/dubbo-monitor ClusterIP 10.254.184.23 <none> 8080/TCP 14d
deployment.apps/dubbo-monitor 1/1 1 1 14d
replicaset.apps/dubbo-monitor-5bb45c8b97 1 1 1 14d

    10) Modify the named configuration file -- host11

vi /var/named/od.com.zone    # add an A record for dubbo-monitor (see the sketch below)
# restart the service
systemctl restart named
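
    The record itself is not shown in the original; assuming the same nginx proxy VIP that demo.od.com uses later in this post, the added line would look like the following (remember to bump the zone's serial number as well):

dubbo-monitor A 192.168.122.10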

    11) Visit the monitor; dubbo-demo-service already shows up under providers.



02 Clone a dubbo-client project

1) Log in to gitee.com and clone a dubbo-client project; refer to the earlier post on delivering dubbo-server:

[kubernetes] 交付dubbo之jenkins持续交付dubbo-server


03 Delivering the dubbo consumer

    1) Open Blue Ocean


    2) Select dubbo-demo


    3) Click Run


    4) Configure the parameters

app_name:dubbo-demo-consumer
image_name:app/dubbo-demo-consumer
git_repo:git@gitee.com:xxxx/dubbo-demo-web.git
git_version:master
add_tag:211004_1024
mvn_dir:./
target_dir:./dubbo-client/target/
mvn_cmd: (keep the default)
base_image:base/jre:8u112
maven:3.6.3-8u291
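
    These parameters feed the Jenkins pipeline built in the dubbo-server post; assuming that pipeline tags images as <git_version>_<add_tag>, this build should push the image that the Deployment below references:

harbor.od.com/app/dubbo-demo-consumer:master_211004_1024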


    5) Run the pipeline


    6) Check harbor to confirm the image is there


    7) Create the consumer resource manifests

mkdir -pv /data/k8s-yaml/dubbo-consumer
# dp
cat dp.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: app
  labels:
    name: dubbo-demo-consumer
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dubbo-demo-consumer
  template:
    metadata:
      labels:
        app: dubbo-demo-consumer
        name: dubbo-demo-consumer
    spec:
      containers:
      - name: dubbo-demo-consumer
        image: harbor.od.com/app/dubbo-demo-consumer:master_211004_1024
        ports:
        - containerPort: 8080
          protocol: TCP
        - containerPort: 20880
          protocol: TCP
        env:
        - name: JAR_BALL
          value: dubbo-client.jar
        imagePullPolicy: IfNotPresent
      imagePullSecrets:
      - name: harbor
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      securityContext:
        runAsUser: 0
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  revisionHistoryLimit: 7
  progressDeadlineSeconds: 600
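
    The JAR_BALL variable above is consumed by the entrypoint baked into the base/jre:8u112 image from the dubbo-server post, so its value has to match the jar name the build produces; conceptually that entrypoint does something like this sketch:

# sketch only; the real entrypoint script lives in the base image built in the dubbo-server post
exec java -jar ${JAR_BALL}
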
# svc
cat svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: dubbo-demo-consumer
  namespace: app
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
  selector:
    app: dubbo-demo-consumer
# ingress
cat ingress.yaml
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: dubbo-demo-consumer
  namespace: app
spec:
  rules:
  - host: demo.od.com
    http:
      paths:
      - path: /
        backend:
          serviceName: dubbo-demo-consumer
          servicePort: 8080

    8) Apply them on a k8s node

kubectl apply -f http://k8s-yaml.od.com/dubbo-consumer/dp.yaml
kubectl apply -f http://k8s-yaml.od.com/dubbo-consumer/svc.yaml
kubectl apply -f http://k8s-yaml.od.com/dubbo-consumer/ingress.yaml

    9) Check the consumer resources

[root@host21 ~]# kubectl get all -n app | grep consumer
pod/dubbo-demo-consumer-676b9d45fc-8k7jp 1/1 Running 2 13d
service/dubbo-demo-consumer ClusterIP 10.254.41.69 <none> 8080/TCP 14d
deployment.apps/dubbo-demo-consumer 1/1 1 1 14d
replicaset.apps/dubbo-demo-consumer-5b6bdd8f9c 0 0 0 14d
replicaset.apps/dubbo-demo-consumer-676b9d45fc 1 1 1 13d
replicaset.apps/dubbo-demo-consumer-86c9ff44b 0 0 0 14d
replicaset.apps/dubbo-demo-consumer-8d88d957d 0 0 0 14d

    10) Check dubbo-monitor; a consumer is now registered.


    11) Add a DNS record

vi /var/named/od.com.zone
demo A 192.168.122.10
# restart the service
systemctl restart named

    12) Test the consumer

https://demo.od.com/hello?name=运维少年


    13) A walk through the whole flow

    In the consumer code, helloService.hello(name) is called. The helloService bean exposes a hello method, but there is no implementation of that method locally; it is declared and implemented by the service provider.


    The provider is what actually implements the hello method, so to the consumer the remote call reads just like calling a local method.

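    The code screenshots are not reproduced here; as a rough Java sketch of the two sides (the names are illustrative, and the demo project may wire dubbo through XML configuration rather than annotations):

// Consumer side (dubbo-client): only the interface is visible locally.
// dubbo injects a proxy that forwards calls to a provider discovered via ZooKeeper.
public interface HelloService {
    String hello(String name);
}

@RestController
public class HelloController {
    @Reference                      // dubbo remote reference, resolved through the registry
    private HelloService helloService;

    @RequestMapping("/hello")
    public String hello(String name) {
        return helloService.hello(name);   // reads like a local call, executes on the provider
    }
}

// Provider side (dubbo-server): the actual implementation registered in ZooKeeper.
@Service                            // com.alibaba.dubbo.config.annotation.Service
public class HelloServiceImpl implements HelloService {
    @Override
    public String hello(String name) {
        return "hello, " + name;
    }
}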


04 Summary

    With this delivery model, consumers can be scaled out directly during business peaks without end users noticing anything, and after developers commit new code an image can be built from the commit id, giving fast releases.
