Day 156 Learning Check-in (Kubernetes: Setting Up a Monitoring Platform; High-Availability Cluster Deployment)

Posted by doudoutj

Setting Up the Monitoring Platform

Step 1: Deploy Prometheus

  • configmap.yaml
  • prometheus.deploy.yml
  • prometheus.svc.yml
  • rbac-setup.yaml
  • node-exporter.yaml

Deploy the node-exporter DaemonSet

node-exporter.yaml

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  selector:
    matchLabels:
      k8s-app: node-exporter
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672    # fixed NodePort: node-exporter is reachable on every node at 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter

[root@master ~]# mkdir pgmonitor
[root@master ~]# cd pgmonitor/
[root@master pgmonitor]# ls
grafana  node-exporter.yaml  prometheus
[root@master pgmonitor]# vim node-exporter.yaml
[root@master pgmonitor]# kubectl create -f node-exporter.yaml
daemonset.apps/node-exporter created
service/node-exporter created
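
To confirm the DaemonSet is healthy, check that one pod is running per node and that metrics are reachable through the fixed NodePort. A quick sanity check (<node-ip> stands for any node's address):

kubectl get daemonset node-exporter -n kube-system
curl -s http://<node-ip>:31672/metrics | head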

Deploy the remaining YAML files

[root@master pgmonitor]# cd prometheus/
[root@master prometheus]# ls
configmap.yaml  prometheus.deploy.yml  prometheus.svc.yml  rbac-setup.yaml
[root@master prometheus]# kubectl create -f rbac-setup.yaml
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/prometheus created
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
[root@master prometheus]# kubectl create -f configmap.yaml
configmap/prometheus-config created
[root@master prometheus]# vim prometheus.deploy.yml
[root@master prometheus]# kubectl create -f prometheus.deploy.yml
deployment.apps/prometheus created
[root@master prometheus]# kubectl create -f prometheus.svc.yml
service/prometheus created
[root@master prometheus]# kubectl get pods -n kube-system   # check: by default everything is deployed in kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-jx925         1/1     Running   3          7d19h
coredns-7f89b7bc75-pml7q         1/1     Running   3          7d19h
etcd-master                      1/1     Running   9          7d19h
kube-apiserver-master            1/1     Running   9          7d19h
kube-controller-manager-master   1/1     Running   0          5d
kube-flannel-ds-7kmgr            1/1     Running   3          7d18h
kube-flannel-ds-hmqcw            1/1     Running   4          7d18h
kube-flannel-ds-mql8h            1/1     Running   11         7d18h
kube-flannel-ds-x565h            1/1     Running   3          7d18h
kube-proxy-cgh25                 1/1     Running   3          7d19h
kube-proxy-lt5d7                 1/1     Running   5          7d19h
kube-proxy-nlqrt                 1/1     Running   3          7d19h
kube-proxy-vw255                 1/1     Running   10         7d19h
kube-scheduler-master            1/1     Running   0          5d
node-exporter-5cdp9              1/1     Running   0          16m
node-exporter-ggnxc              1/1     Running   0          16m
node-exporter-tv8ld              1/1     Running   0          16m
prometheus-68546b8d9-zwd4h       1/1     Running   0          5m20s


rbac-setup.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
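
As a quick check that the ClusterRoleBinding actually grants the permissions Prometheus needs, kubectl can test access on behalf of the service account via impersonation:

kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:prometheus
# expected output: yes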

configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval:     15s
      evaluation_interval: 15s
    scrape_configs:

    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https

    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
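
Note that the kubernetes-service-endpoints and kubernetes-pods jobs only keep targets that opt in through prometheus.io/* annotations (the keep rules on __meta_kubernetes_service_annotation_prometheus_io_scrape above). As an illustration, annotating a Service is enough for it to be scraped automatically; my-svc, the default namespace, and port 8080 here are hypothetical placeholders:

kubectl annotate service my-svc -n default \
  prometheus.io/scrape=true \
  prometheus.io/port=8080 \
  prometheus.io/path=/metrics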

prometheus.deploy.yml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus    
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config   
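
Two caveats worth noting in this Deployment: prometheus.yml is read from the ConfigMap only at startup, and the data volume is an emptyDir, so metric history is lost whenever the pod is rescheduled. After editing configmap.yaml, one way to pick up the new config (assuming kubectl v1.15+ for rollout restart) is:

kubectl apply -f configmap.yaml
kubectl rollout restart deployment/prometheus -n kube-system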

prometheus.svc.yml

---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus
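
With the Service in place, Prometheus is reachable on any node at NodePort 30003: the web UI lives at http://<node-ip>:30003, and the scrape targets can also be checked from the command line (<node-ip> stands for any node's address):

curl -s http://<node-ip>:30003/api/v1/targets | head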

Step 2: Deploy Grafana

grafana-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-core
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
      component: core
  template:
    metadata:
      labels:
        app: grafana
        component: core
    spec:
      containers:
      - image: grafana/grafana:4.2.0
        name: grafana-core
        imagePullPolicy: IfNotPresent
        # env:
        resources:
          # keep request = limit to keep this container in guaranteed class
          limits:
            cpu: 100m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 100Mi
        env:
          # The following env variables set up basic auth with the default admin user and admin password.
          - name: GF_AUTH_BASIC_ENABLED
            value: "true"
          - name: GF_AUTH_ANONYMOUS_ENABLED
            value: "false"
          # - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          #   value: Admin
          # does not really work, because of template variables in exported dashboards:
          # - name: GF_DASHBOARDS_JSON_ENABLED
          #   value: "true"
        readinessProbe:
          httpGet:
            path: /login
            port: 3000
          # initialDelaySeconds: 30
          # timeoutSeconds: 1
        volumeMounts:
        - name: grafana-persistent-storage
          mountPath: /var
      volumes:
      - name: grafana-persistent-storage
        emptyDir: {}

grafana-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: kube-system
  labels:
    app: grafana
    component: core
spec:
  type: NodePort
  ports:
    - port: 3000
  selector:
    app: grafana
    component: core

grafana-ing.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
   name: grafana
   namespace: kube-system
spec:
   rules:
   - host: k8s.grafana
     http:
       paths:
       - path: /
         backend:
          serviceName: grafana
          servicePort: 3000
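
extensions/v1beta1 Ingress is deprecated (kubectl prints a warning about this in the transcript below) and was removed in Kubernetes v1.22. On a newer cluster, a minimal equivalent under networking.k8s.io/v1 would look roughly like this; pathType and the nested backend schema are required in v1, and whether you also need an ingressClassName depends on your ingress controller:

cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: grafana
  namespace: kube-system
spec:
  rules:
  - host: k8s.grafana
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: grafana
            port:
              number: 3000
EOF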

[root@master pgmonitor]# cd grafana/
[root@master grafana]# ls
grafana-deploy.yaml  grafana-ing.yaml  grafana-svc.yaml
[root@master grafana]# vim grafana-deploy.yaml
[root@master grafana]# kubectl create -f grafana-deploy.yaml
deployment.apps/grafana-core created
[root@master grafana]# kubectl create -f grafana-svc.yaml
service/grafana created
[root@master grafana]# kubectl create -f grafana-ing.yaml
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
ingress.extensions/grafana created
[root@master grafana]# kubectl get pods -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE
coredns-7f89b7bc75-jx925         1/1     Running   3          7d19h
coredns-7f89b7bc75-pml7q         1/1     Running   3          7d19h
etcd-master                      1/1     Running   9          7d19h
grafana-core-6d6fb7566-fzmfz     1/1     Running   0          13m
kube-apiserver-master            1/1     Running   9          7d19h
kube-controller-manager-master   1/1     Running   0          5d
kube-flannel-ds-7kmgr            1/1     Running   3          7d19h
kube-flannel-ds-hmqcw            1/1     Running   4          7d19h
kube-flannel-ds-mql8h            1/1     Running   11         7d19h
kube-flannel-ds-x565h            1/1     Running   3          7d19h
kube-proxy-cgh25                 1/1     Running   3          7d19h
kube-proxy-lt5d7                 1/1     Running   5          7d19h
kube-proxy-nlqrt                 1/1     Running   3          7d19h
kube-proxy-vw255                 1/1     Running   10         7d19h
kube-scheduler-master            1/1     Running   0          5d
node-exporter-5cdp9              1/1     Running   0          38m
node-exporter-ggnxc              1/1     Running   0          38m
node-exporter-tv8ld              1/1     Running   0          38m
prometheus-68546b8d9-zwd4h       1/1     Running   0          27m
[root@master grafana]# 

Step 3: Open Grafana, configure the data source, and import a dashboard template

[root@master grafana]# kubectl get svc -n kube-system
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
grafana         NodePort    10.100.244.146   <none>        3000:30181/TCP           17m
kube-dns        ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   7d19h
node-exporter   NodePort    10.99.111.152    <none>        9100:31672/TCP           43m
prometheus      NodePort    10.102.170.71    <none>        9090:30003/TCP           31m
[root@master grafana]# kubectl get svc -n kube-system -o wide
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE     SELECTOR
grafana         NodePort    10.100.244.146   <none>        3000:30181/TCP           22m     app=grafana,component=core
kube-dns        ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   7d20h   k8s-app=kube-dns
node-exporter   NodePort    10.99.111.152    <none>        9100:31672/TCP           48m     k8s-app=node-exporter
prometheus      NodePort    10.102.170.71    <none>        9090:30003/TCP           36m     app=prometheus

Grafana is reachable in a browser through the NodePort shown above, e.g. http://<node-ip>:30181.

The default username and password are both admin.
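
Before logging in, a quick reachability check can hit the same /login path the readinessProbe uses (<node-ip> stands for any node's address):

curl -I http://<node-ip>:30181/login
# expect an HTTP 200 response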

Configure the data source, choosing Prometheus as the type.

Import the dashboard template used to display the data; enter 315, a fixed dashboard ID.

Cleaning up resources created from a YAML file (note that the two commands below do different things):

kubectl delete -f xxx.yaml   # deletes the Kubernetes resources defined in the file

rm xxx.yaml   # only removes the file itself; it does not touch anything in the cluster

High-Availability Cluster Deployment

Operations on the master nodes

1. Deploy keepalived (a minimal configuration sketch follows this list)

2. Deploy haproxy

3. Run the cluster initialization

4. Install Docker and the network plugin
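
Since the hosts file below maps the virtual IP 192.168.44.158 to k8s-vip, a minimal keepalived configuration on master1 might look like the following. This is a sketch only: the interface name eth0, the router ID, the priority, and the password are assumptions to adapt to your environment, and master2 would use state BACKUP with a lower priority:

cat > /etc/keepalived/keepalived.conf <<EOF
vrrp_instance VI_1 {
    state MASTER            # BACKUP on master2
    interface eth0          # assumption: adjust to your NIC name
    virtual_router_id 51
    priority 100            # use a lower value on master2, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass k8s-vip
    }
    virtual_ipaddress {
        192.168.44.158      # the k8s-vip address from /etc/hosts below
    }
}
EOF
systemctl enable keepalived && systemctl start keepalived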

Operations on the worker nodes

Join the cluster

Install Docker

Install the network plugin

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config  # permanent
setenforce 0  # temporary

# Disable swap
swapoff -a  # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab    # permanent

# Set the hostname according to your plan
hostnamectl set-hostname <hostname>

# Add hosts entries on the masters (create this on both master1 and master2)
cat >> /etc/hosts << EOF
192.168.44.158    master.k8s.io   k8s-vip
<Aliyun public IP>   master01.k8s.io master1
<Aliyun public IP>   master02.k8s.io master2
<Aliyun public IP>   node01.k8s.io   node1
EOF

# Pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system  # apply the settings
