Deploying ELK on k8s (the ES cluster must be installed in advance; version 6.8.2)

# Prerequisite: make sure the base k8s cluster environment is already deployed!
Note: fill the image fields with your actual image registry address (check with docker images | grep <image name>), and fill in your actual ES cluster addresses!
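Two quick sanity checks are worth doing first. A minimal sketch, assuming the ES address and elastic user that appear later in this article (substitute your actual values and password):

kubectl create namespace hqs (all of the manifests below are deployed into the hqs namespace)
curl -u 'elastic:<password>' 'http://10.66.0.126:9200/_cluster/health?pretty' (the cluster status should be green or yellow)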

Deploy Kibana, which provides the UI for viewing the stored logs

First deploy the Service, using NodePort to expose a port so Kibana can be reached from outside the cluster.
vim hqs-kibana-svc.yaml (write the Kibana Service yaml file)

apiVersion: v1
kind: Service
metadata:
  labels:
    app: hqs-kibana
  name: hqs-kibana
  namespace: hqs
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: http
    nodePort: 32115
    port: 5601
    protocol: TCP
    targetPort: 5601
  selector:
    apps.deployment: hqs-kibana
  sessionAffinity: None
  type: NodePort

kubectl create -f hqs-kibana-svc.yaml (create the Service that exposes the port externally)
kubectl get svc -n hqs (check whether the svc named hqs-kibana was created successfully)

Deploy Kibana itself, as a Deployment:

vim hqs-kibana-deployment.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: hqs-kibana
  namespace: hqs
  labels:
    name: hqs-kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      apps.deployment: hqs-kibana
  template:
    metadata:
      labels:
        apps.deployment: hqs-kibana
    spec:
      containers:
        - name: hqs-kibana
          image: elastic/kibana:6.8.2
          env:
          - name: "ELASTICSEARCH_URL"
            value: "http://10.66.0.126:9200"
          - name: "ELASTICSEARCH_USERNAME"
            value: "elastic"
          - name: "ELASTICSEARCH_PASSWORD"
            value: "7ujm<KI*"  
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
          volumeMounts:
          - name: host-time
            mountPath: /etc/localtime
      volumes:
      - name: host-time
        hostPath:
          path: /etc/localtime
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

kubectl create -f hqs-kibana-deployment.yaml (deploy the pod)
kubectl get pod -n hqs (check the hqs-kibana-xxxxx pod status; Running means it is healthy)
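If the pod does not reach Running, or the UI misbehaves later, the pod logs are the first place to look. A quick check, assuming the names and labels used above:

kubectl describe pod -n hqs -l apps.deployment=hqs-kibana (shows events such as image pull or scheduling failures)
kubectl logs -n hqs deployment/hqs-kibana (Kibana logs its Elasticsearch connection status on startup)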

Deploy Logstash for log collection; it centralizes, transforms, and forwards the data to Elasticsearch

vim hqs-logstash-deployment.yaml (with replicas set to 2, two pods will be created and scheduled anywhere in the cluster)

kind: ConfigMap
apiVersion: v1
metadata:
  name: hqs-logstash-config
  namespace: hqs
data:
  logstash.conf: |-
    input {
      beats {
        port => 5044
      }
    }

    filter {  # configure the filter
      grok {
        # parse lines like: [2021-08-20 10:00:00,000][app.py:42][1234,5678] INFO some message
        match => { "message" => "\[%{TIMESTAMP_ISO8601:logTime}\]\[%{DATA:fileName}:%{NUMBER:line}\]\[%{NUMBER:process},%{NUMBER:thread}\] %{LOGLEVEL:loglevel}%{GREEDYDATA:message}" }  # define the log parsing format
      }
    }

    output {
      stdout { codec => rubydebug }
      elasticsearch {
        hosts => ["http://10.66.0.126:9200","http://10.66.0.27:9200","http://10.66.0.32:9200"]
        index => "%{[fields][indexname]}-%{+YYYY.MM.dd}"
        user => "elastic"
        password => "changeme"
      }
    }
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: hqs-logstash
  namespace: hqs
  labels:
    name: hqs-logstash
spec:
  replicas: 2
  selector:
    matchLabels:
      apps.deployment: hqs-logstash
  template:
    metadata:
      labels:
        apps.deployment: hqs-logstash
    spec:
      volumes:
        - name: hqs-logstash-config
          configMap:
            name: hqs-logstash-config
            items:
              - key: logstash.conf
                path: logstash.conf
            defaultMode: 420
      containers:
        - name: hqs-logstash
          image: elastic/logstash:6.8.2
          command:
          - logstash
          - '-f'
          - '/etc/logstash.conf'      
          resources: 
            limits:
              cpu: 1000m
              memory: 2048Mi
            requests:
              cpu: 100m
              memory: 512Mi
          volumeMounts:
            - name: hqs-logstash-config
              readOnly: true
              mountPath: /etc/logstash.conf
              subPath: logstash.conf
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600

kubectl create -f hqs-logstash-deployment.yaml
kubectl get pod -n hqs | grep hqs-logstash (check whether the pods deployed successfully; there should normally be two)
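Before wiring anything to Logstash, it is worth confirming that the pipeline config was parsed and the ES output connected. A quick check, assuming the pod labels from the Deployment above:

kubectl logs -n hqs -l apps.deployment=hqs-logstash --tail=50 (look for a pipeline-started message and for any elasticsearch connection errors)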
Create a ClusterIP Service so that Filebeat can reach Logstash by service name (the filebeat config below points at hqs-logstash:5044):
vim hqs-logstash-svc.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: hqs-logstash
  name: hqs-logstash
  namespace: hqs
spec:
  ports:
  - name: tcp
    port: 5044
    protocol: TCP
    targetPort: 5044
  selector:
    apps.deployment: hqs-logstash
  sessionAffinity: None
  type: ClusterIP

kubectl create -f hqs-logstash-svc.yaml
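The Service only does its job if its selector matches the Logstash pods, so confirm that endpoints were populated. A quick check:

kubectl get endpoints hqs-logstash -n hqs (should list the two logstash pod IPs on port 5044)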

Deploy Filebeat to collect logs on each application node (as a DaemonSet).

vim hqs-filebeat-deployment.yaml

---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: hqs
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: log
      enabled: true
      paths:
        - /data/release/hqs-toc/hqs-toc*.log
      fields:
        indexname: hqs-toc
    - type: log
      enabled: true
      paths:
        - /data/release/hqs-eureka/hqs-eureka*.log
      fields:
        indexname: hqs-eureka
    - type: log
      enabled: true
      paths:
        - /data/release/hqs-rights/hqs-rights*.log
      fields:
        indexname: hqs-rights
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false
    setup.template.settings:
      index.number_of_shards: 3
    setup.kibana:
    output.logstash:
      hosts: ["hqs-logstash:5044"]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hqs-filebeat
  namespace: hqs
  labels:
    name: hqs-filebeat
spec:
  selector:
    matchLabels:
      apps.daemonset: hqs-filebeat
  template:
    metadata:
      labels:
        apps.daemonset: hqs-filebeat
    spec:
      containers:
      - name: hqs-filebeat
        image: elastic/filebeat:6.8.2
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0    
        volumeMounts:
        - name: eureka-logs
          mountPath: /data/release/hqs-eureka/
        - name: toc-logs
          mountPath: /data/release/hqs-toc/
        - name: rights-logs
          mountPath: /data/release/hqs-rights/
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
      volumes:
      - name: eureka-logs
        hostPath:
          path: /data/release/hqs-eureka/
      - name: toc-logs
        hostPath:
          path: /data/release/hqs-toc/
      - name: rights-logs
        hostPath:
          path: /data/release/hqs-rights/
      - name: config
        configMap:
          name: filebeat-config

kubectl create -f hqs-filebeat-deployment.yaml
kubectl get pod -n hqs | grep hqs-filebeat (one pod is deployed on every node in the cluster)
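A DaemonSet should report one ready pod per schedulable node. A quick check, assuming no taints keep Filebeat off any node:

kubectl get daemonset hqs-filebeat -n hqs (DESIRED and READY should both equal the node count)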
Test access in a browser at http://nodeip:32115 (use the IP of any cluster node)
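Once Filebeat has shipped some log lines, indices named after the indexname fields (hqs-toc-*, hqs-eureka-*, hqs-rights-*) should appear in ES. A quick check, assuming the ES address and elastic credentials used in the logstash output above:

curl -u 'elastic:<password>' 'http://10.66.0.126:9200/_cat/indices?v' (the hqs-* indices should be listed)

Then, in the Kibana UI, create an index pattern such as hqs-* under Management > Index Patterns and browse the logs in Discover.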
