Cloud Native Kubernetes in Practice: Deploying the Weave Scope Monitoring Platform on a k8s Cluster

Posted by 江湖有缘


I. Introduction to Weave Scope

1. Overview

Weave Scope is a visual monitoring tool for Docker and Kubernetes. It provides a top-down view of your applications as well as of the entire infrastructure, so you can monitor and troubleshoot distributed containerized applications in real time.

2. Key features

1. Interactive topology view
2. Graph and table modes
3. Filtering
4. Search
5. Real-time metrics
6. Container troubleshooting
7. Plugin extensions

3. Components

Probe (agent): runs on every host, collects information about containers and the host itself, and sends it to the App.
App: processes the collected information, generates reports, and presents them in an interactive UI. Both roles are served by the same binary, as sketched below.
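
The scope binary selects its role with the --mode flag; the server Deployment in section VI runs it with --mode=app. A minimal sketch of the two invocations (the probe's target address is an assumption that matches the weave-scope-app Service created later):

# app mode: serves the UI and collects probe reports (listens on :4040 by default)
/home/weave/scope --mode=app

# probe mode: runs on each host and publishes to the app
/home/weave/scope --mode=probe weave-scope-app.weave.svc.cluster.local:80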

II. Check the local Kubernetes cluster status

1. Check node status

[root@k8s-master ~]# kubectl get nodes -owide
NAME         STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   7d15h   v1.23.1   192.168.3.201   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   containerd://1.6.6
k8s-node01   Ready    <none>                 7d15h   v1.23.1   192.168.3.202   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   containerd://1.6.6
k8s-node02   Ready    <none>                 7d15h   v1.23.1   192.168.3.203   <none>        CentOS Linux 7 (Core)   3.10.0-957.el7.x86_64   containerd://1.6.6

2. Check system pod status

[root@k8s-master ~]# kubectl get pods -n kube-system 
NAME                                       READY   STATUS    RESTARTS      AGE
calico-kube-controllers-7bc6547ffb-2nf66   1/1     Running   1 (23h ago)   7d15h
calico-node-8c4pn                          1/1     Running   1 (27h ago)   7d15h
calico-node-f28qq                          1/1     Running   1 (23h ago)   7d15h
calico-node-wmc2j                          1/1     Running   1 (23h ago)   7d15h
coredns-6d8c4cb4d-6gm4x                    1/1     Running   1 (23h ago)   7d15h
coredns-6d8c4cb4d-7vxlz                    1/1     Running   1 (23h ago)   7d15h
etcd-k8s-master                            1/1     Running   1 (23h ago)   7d15h
kube-apiserver-k8s-master                  1/1     Running   1 (23h ago)   7d15h
kube-controller-manager-k8s-master         1/1     Running   1 (23h ago)   7d15h
kube-proxy-8dfw8                           1/1     Running   1 (23h ago)   7d15h
kube-proxy-ghzrv                           1/1     Running   1 (23h ago)   7d15h
kube-proxy-j867z                           1/1     Running   1 (27h ago)   7d15h
kube-scheduler-k8s-master                  1/1     Running   1 (23h ago)   7d15h

III. Install NFS shared storage

1. Install NFS (on the master node 192.168.3.201, which acts as the NFS server)

yum install -y nfs-utils

2. Create the shared directory

mkdir -p /nfs/data

3. Configure the export

echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports

4. Re-export the shares

exportfs -r

5. Enable and restart the NFS services

① Enable the services at boot

systemctl enable --now rpcbind
systemctl enable --now nfs-server

② Restart the services

service rpcbind stop
service nfs stop
service rpcbind start
service nfs start

6. Verify the export from another node

[root@k8s-node01 ~]#  showmount -e 192.168.3.201
Export list for 192.168.3.201:
/nfs/data *
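
Optionally, test-mount the export from a worker node (assuming /mnt is unused; unmount when done):

mount -t nfs 192.168.3.201:/nfs/data /mnt
df -h /mnt
umount /mnt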

IV. Configure the StorageClass

1. Write the sc.yaml file

[root@k8s-master scope]# cat sc.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
 name: nfs-storage
 annotations:
   storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
 archiveOnDelete: "true"  

---
apiVersion: apps/v1
kind: Deployment
metadata:
 name: nfs-client-provisioner
 labels:
   app: nfs-client-provisioner
 # replace with namespace where provisioner is deployed
 namespace: default
spec:
 replicas: 1
 strategy:
   type: Recreate
 selector:
   matchLabels:
     app: nfs-client-provisioner
 template:
   metadata:
     labels:
       app: nfs-client-provisioner
   spec:
     serviceAccountName: nfs-client-provisioner
     containers:
       - name: nfs-client-provisioner
         image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
         # resources:
         #    limits:
         #      cpu: 10m
         #    requests:
         #      cpu: 10m
         volumeMounts:
           - name: nfs-client-root
             mountPath: /persistentvolumes
         env:
           - name: PROVISIONER_NAME
             value: k8s-sigs.io/nfs-subdir-external-provisioner
           - name: NFS_SERVER
              value: 192.168.3.201 ## set this to your NFS server address
            - name: NFS_PATH
              value: /nfs/data ## the directory exported by the NFS server
     volumes:
       - name: nfs-client-root
         nfs:
           server: 192.168.3.201
           path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
 name: nfs-client-provisioner
 # replace with namespace where provisioner is deployed
 namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: nfs-client-provisioner-runner
rules:
 - apiGroups: [""]
   resources: ["nodes"]
   verbs: ["get", "list", "watch"]
 - apiGroups: [""]
   resources: ["persistentvolumes"]
   verbs: ["get", "list", "watch", "create", "delete"]
 - apiGroups: [""]
   resources: ["persistentvolumeclaims"]
   verbs: ["get", "list", "watch", "update"]
 - apiGroups: ["storage.k8s.io"]
   resources: ["storageclasses"]
   verbs: ["get", "list", "watch"]
 - apiGroups: [""]
   resources: ["events"]
   verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: run-nfs-client-provisioner
subjects:
 - kind: ServiceAccount
   name: nfs-client-provisioner
   # replace with namespace where provisioner is deployed
   namespace: default
roleRef:
 kind: ClusterRole
 name: nfs-client-provisioner-runner
 apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: leader-locking-nfs-client-provisioner
 # replace with namespace where provisioner is deployed
 namespace: default
rules:
 - apiGroups: [""]
   resources: ["endpoints"]
   verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
 name: leader-locking-nfs-client-provisioner
 # replace with namespace where provisioner is deployed
 namespace: default
subjects:
 - kind: ServiceAccount
   name: nfs-client-provisioner
   # replace with namespace where provisioner is deployed
   namespace: default
roleRef:
 kind: Role
 name: leader-locking-nfs-client-provisioner
 apiGroup: rbac.authorization.k8s.io

2. Apply the sc.yaml file

[root@k8s-master scope]# kubectl apply -f sc.yaml 
storageclass.storage.k8s.io/nfs-storage created
deployment.apps/nfs-client-provisioner created
serviceaccount/nfs-client-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-client-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-client-provisioner created
role.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created
rolebinding.rbac.authorization.k8s.io/leader-locking-nfs-client-provisioner created

3. Check the StorageClass object

[root@k8s-master scope]# kubectl get sc
NAME                    PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-storage (default)   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  6m34s
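
Optionally, verify that dynamic provisioning works end to end with a throwaway claim (test-pvc is a hypothetical name used only for this check; the default StorageClass is picked up automatically):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Mi
EOF
kubectl get pvc test-pvc    # should reach Bound within a few seconds
kubectl delete pvc test-pvc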

V. Install the ingress-nginx controller

1. Download the ingress-nginx yaml file (this presigned link carries an Expires parameter and may have lapsed; the equivalent manifest is available from the upstream ingress-nginx project)

wget 'https://oss-public.obs.cn-south-1.myhuaweicloud.com:443/ingress-nginx/ingress-nginx.yml?AccessKeyId=8QZQXILP1SCWCCLMSGIH&Expires=1660039750&Signature=2QsNqXejoifFVJjaJl7XSa88AgY%3D'

2. Create the ingress controller

[root@k8s-master scope]# kubectl apply -f ingress-nginx.yml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created

3. Check ingress status

[root@k8s-master scope]# kubectl get pods -n ingress-nginx -owide
NAME                                        READY   STATUS      RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-79cm5        0/1     Completed   0          34m   10.244.85.193   k8s-node01   <none>           <none>
ingress-nginx-admission-patch-jbz68         0/1     Completed   0          34m   10.244.85.194   k8s-node01   <none>           <none>
ingress-nginx-controller-7bcfbb6786-tdv6n   1/1     Running     0          34m   192.168.3.203   k8s-node02   <none>           <none>
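
Before pointing hosts entries at the controller, check how its service is exposed (NodePort, LoadBalancer, or host network, depending on the manifest downloaded above) and confirm the ingress class was registered:

kubectl get svc -n ingress-nginx
kubectl get ingressclass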

VI. Install the Weave Scope server (App)

1. Create the namespace (optional here, since scope-app.yaml below also declares it)

[root@k8s-master scope]# kubectl create namespace weave
namespace/weave created

2. Write the scope-app.yaml file

[root@k8s-master scope]# cat scope-app.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: weave

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: weave-scope
  namespace: weave
  labels:
    name: weave-scope

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: weave-scope
  labels:
    name: weave-scope
rules:
  - apiGroups:
      - ''
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - delete
  - apiGroups:
      - ''
    resources:
      - pods/log
      - services
      - nodes
      - namespaces
      - persistentvolumes
      - persistentvolumeclaims
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - deployments
      - daemonsets
      - statefulsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
    resources:
      - deployments
      - daemonsets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - apps
    resources:
      - deployments/scale
    verbs:
      - get
      - update
  - apiGroups:
      - extensions
    resources:
      - deployments/scale
    verbs:
      - get
      - update
  - apiGroups:
      - storage.k8s.io
    resources:
      - storageclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - volumesnapshot.external-storage.k8s.io
    resources:
      - volumesnapshots
      - volumesnapshotdatas
    verbs:
      - list
      - watch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: weave-scope
  labels:
    name: weave-scope
roleRef:
  kind: ClusterRole
  name: weave-scope
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: weave-scope
    namespace: weave

---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: scope-app
  namespace: weave
  annotations:
    kubernetes.io/ingress.class: "nginx"
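    # note: this annotation is deprecated; on newer clusters spec.ingressClassName: nginx is the equivalent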
spec:
  rules:
  - host: abc.scope.com
    http:
      paths:
      - backend:
          service:
            name: weave-scope-app
            port: 
              number: 80
        path: /
        pathType: Prefix



---
apiVersion: v1
kind: Service
metadata:
  name: weave-scope-app
  namespace: weave
  labels:
    name: weave-scope-app
    app: weave-scope
    weave-cloud-component: scope
    weave-scope-component: app
spec:
  ports:
    - name: app
      port: 80
      protocol: TCP
      targetPort: 4040
#      nodePort: 31232
  selector:
    name: weave-scope-app
    app: weave-scope
    weave-cloud-component: scope
    weave-scope-component: app
 # type: NodePort
    
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: weave-scope-app
  namespace: weave
  labels:
    name: weave-scope-app
    app: weave-scope
    weave-cloud-component: scope
    weave-scope-component: app
spec:
  replicas: 1
  selector:
    matchLabels:
      name: weave-scope-app
      app: weave-scope
      weave-cloud-component: scope
      weave-scope-component: app
  template:
    metadata:
      labels:
        name: weave-scope-app
        app: weave-scope
        weave-cloud-component: scope
        weave-scope-component: app
    spec:
      containers:
        - name: app
          image: docker.io/weaveworks/scope:1.13.1
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 4040
              protocol: TCP
          args:
            - '--mode=app'
          command:
            - /home/weave/scope
          env: []


3. Apply the scope-app.yaml file

kubectl apply -f scope-app.yaml

4. Check pod status

[root@k8s-master scope]# kubectl get pod -n weave 
NAME                               READY   STATUS    RESTARTS   AGE
weave-scope-app-75df8f8754-kr9mv   1/1     Running   0          11m
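
With the App running, the UI is served through the Ingress rule defined above. A quick reachability check from a client machine (the hosts mapping below is an assumption: abc.scope.com comes from the Ingress rule, 192.168.3.203 is the node running the ingress-nginx controller, and the reachable port depends on how the controller service is exposed):

echo "192.168.3.203 abc.scope.com" >> /etc/hosts
curl -I http://abc.scope.com/

An HTTP response here means the Weave Scope UI is reachable; open http://abc.scope.com in a browser to see the topology.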

VII. Install the Weave Scope agent (probe)
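
The probes run as a DaemonSet so that every node reports into the App. Below is a minimal sketch of a scope-agent.yaml, modeled on the upstream Weave Scope manifest and trimmed for this cluster (which runs containerd, so the Docker-socket and debugfs mounts from upstream are omitted); the image version matches the App deployment above, and the target address matches the weave-scope-app Service:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: weave-scope-agent
  namespace: weave
  labels:
    name: weave-scope-agent
    app: weave-scope
    weave-cloud-component: scope
    weave-scope-component: agent
spec:
  selector:
    matchLabels:
      name: weave-scope-agent
  template:
    metadata:
      labels:
        name: weave-scope-agent
        app: weave-scope
        weave-cloud-component: scope
        weave-scope-component: agent
    spec:
      # the probe needs host-level visibility to map processes and connections
      hostPID: true
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: weave-scope
      tolerations:
        # allow the probe to run on the control-plane node as well
        - effect: NoSchedule
          operator: Exists
      containers:
        - name: scope-agent
          image: docker.io/weaveworks/scope:1.13.1
          imagePullPolicy: IfNotPresent
          command:
            - /home/weave/scope
          args:
            - '--mode=probe'
            - '--probe-only'
            - '--probe.kubernetes.role=host'
            # publish to the app service created in section VI
            - 'weave-scope-app.weave.svc.cluster.local:80'
          securityContext:
            privileged: true

Apply it and confirm one agent pod is running per node:

kubectl apply -f scope-agent.yaml
kubectl get pods -n weave -owide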
