I am not able to see logs on the Kibana dashboard

Posted: 2021-02-15 10:01:44

[Question]

I am using the ELK stack (Elasticsearch, Logstash, Kibana) for log processing and analysis in a Kubernetes environment. To capture the logs I am using Filebeat.

ServiceAccount, ClusterRole and ClusterRoleBinding for elasticsearch, yaml below:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: elasticsearch
  labels:
    k8s-app: elasticsearch
rules:
- apiGroups:
  - ""
  resources:
  - "services"
  - "namespaces"
  - "endpoints"
  verbs:
  - "get"
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: kube-system
  name: elasticsearch
  labels:
    k8s-app: elasticsearch
subjects:
- kind: ServiceAccount
  name: elasticsearch
  namespace: kube-system
  apiGroup: ""
roleRef:
  kind: ClusterRole
  name: elasticsearch
  apiGroup: ""

elasticsearch service yaml:

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  ports:
  - port: 9200
    protocol: TCP
    targetPort: db
  selector:
    k8s-app: elasticsearch
  externalIPs:
  - 10.10.0.82

elasticsearch StatefulSet yaml below:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: kube-system
  labels:
    k8s-app: elasticsearch
spec:
  serviceName: elasticsearch
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      k8s-app: elasticsearch
  template:
    metadata:
      labels:
        k8s-app: elasticsearch
    spec:
      serviceAccountName: elasticsearch
      containers:
      - image: elasticsearch:6.8.4
        name: elasticsearch
        resources:
            limits:
              cpu: 1000m
              memory: "2Gi"
            requests:
              cpu: 100m
              memory: "1Gi"
        ports:
        - containerPort: 9200
          name: db
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        volumeMounts:
        - name: data
          mountPath: /data
        env:
        - name: "NAMESPACE"
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      initContainers:
      - image: alpine:3.6
        command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
        name: elasticsearch-init
        securityContext:
          privileged: true
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        k8s-app: elasticsearch
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
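
As an aside, the serviceName referenced by the StatefulSet above is conventionally backed by a headless Service (clusterIP: None), which gives each pod a stable DNS name such as elasticsearch-0.elasticsearch; the elasticsearch Service shown earlier is a regular ClusterIP Service. A minimal headless-Service sketch for comparison (hypothetical, not part of the original manifests, and it would have to replace or be named differently from the existing Service):

```yaml
# Hypothetical headless Service (not in the original manifests): gives each
# StatefulSet pod a stable DNS name like elasticsearch-0.elasticsearch.kube-system
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  clusterIP: None          # headless: DNS resolves directly to the pod IPs
  selector:
    k8s-app: elasticsearch
  ports:
  - port: 9300
    name: transport
```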

pv & pvc0 yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: elklogs-pv0
  namespace: kube-system
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 10.10.0.131
    path: /opt/data/vol/0
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-elasticsearch-0
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

pv_pvc1.yaml

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elklogs-pv1
  namespace: kube-system
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 10.10.0.131
    path: /opt/data/vol/1

---

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: data-elasticsearch-1
  namespace: kube-system
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

logstash_svc.yaml

kind: Service
apiVersion: v1
metadata:
  name: logstash-service
  namespace: kube-system
spec:
  selector:
    app: logstash
  ports:
  - protocol: TCP
    port: 5044
    targetPort: 5044
  externalIPs:
  - 10.10.0.82

logstash_config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-configmap
  namespace: kube-system
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    filter {
      grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
      }
      date {
        match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
      }
      geoip {
        source => "clientip"
      }
    }
    output {
      elasticsearch {
        hosts => ["http://10.10.0.82:9200"]
      }
    }

logstash deployment yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: logstash-deployment
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  template:
    metadata:
      labels:
        app: logstash
    spec:
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:6.3.0
        ports:
        - containerPort: 5044
        volumeMounts:
          - name: config-volume
            mountPath: /usr/share/logstash/config
          - name: logstash-pipeline-volume
            mountPath: /usr/share/logstash/pipeline
      volumes:
      - name: config-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.yml
              path: logstash.yml
      - name: logstash-pipeline-volume
        configMap:
          name: logstash-configmap
          items:
            - key: logstash.conf
              path: logstash.conf

filebeat.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
  resources:
  - namespaces
  - pods
  verbs:
  - get
  - watch
  - list
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.config:
      prospectors:
        # Mounted `filebeat-prospectors` configmap:
        path: ${path.config}/prospectors.d/*.yml
        # Reload prospectors configs as they change:
        reload.enabled: false
      modules:
        path: ${path.config}/modules.d/*.yml
        # Reload module configs as they change:
        reload.enabled: false
    output.logstash:
      hosts: ["http://10.10.0.82:5044"]
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-prospectors
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  kubernetes.yml: |-
    - type: docker
      containers.ids:
      - "*"
      processors:
        - add_kubernetes_metadata:
            in_cluster: true
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat

    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:6.8.4
        args: [
          "-c", "/etc/filebeat.yml",
          "-e",
        ]
        securityContext:
          runAsUser: 0
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          readOnly: true
          subPath: filebeat.yml
        - name: prospectors
          mountPath: /usr/share/filebeat/prospectors.d
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: prospectors
        configMap:
          defaultMode: 0600
          name: filebeat-prospectors
      - name: data
        emptyDir: {}
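
One detail worth checking in the filebeat-config above: Logstash's beats input speaks a plain TCP (lumberjack) protocol rather than HTTP, so entries under output.logstash hosts are conventionally written as host:port with no http:// scheme. A sketch of that form (the fully qualified Service name is an assumption based on the logstash-service manifest shown earlier):

```yaml
output.logstash:
  # host:port only -- no "http://" scheme; the beats protocol is not HTTP
  hosts: ["logstash-service.kube-system.svc.cluster.local:5044"]
```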

kibana.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kibana-logging
  template:
    metadata:
      labels:
        k8s-app: kibana-logging
    spec:
      containers:
      - name: kibana-logging
        image: docker.elastic.co/kibana/kibana-oss:6.8.4
        env:
          - name: ELASTICSEARCH_URL
            value: http://10.10.0.82:9200
        ports:
        - containerPort: 5601
          name: ui
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana-logging
  namespace: kube-system
  labels:
    k8s-app: kibana-logging
    kubernetes.io/name: "Kibana"
spec:
  type: NodePort
  ports:
  - port: 5601
    protocol: TCP
    targetPort: ui
    nodePort: 32010
  selector:
    k8s-app: kibana-logging

Output of kubectl get svc -n kube-system:

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                  AGE
elasticsearch      ClusterIP   10.43.50.63    10.10.0.82    9200/TCP                 31m
kibana-logging     NodePort    10.43.58.127   10.10.0.82    5601:32010/TCP           4m4s
kube-dns           ClusterIP   10.43.0.10     <none>        53/UDP,53/TCP,9153/TCP   23d
logstash-service   ClusterIP   10.43.130.36   10.10.0.82    5044/TCP                 30m

filebeat pod logs:

2020-11-04T16:42:22.857Z        INFO    log/harvester.go:255    Harvester started for file: /var/lib/docker/containers/011d24d334bba573ffbb466b0f3f70ae5ddc986f233e683076eaae7394801203/011d24d334bba573ffbb466b0f3f70ae5ddc986f233e683076eaae7394801203-json.log
2020-11-04T16:42:22.983Z        INFO    pipeline/output.go:95   Connecting to backoff(async(tcp://logstash-service:9600))
2020-11-04T16:42:52.412Z        INFO    [monitoring]    log/log.go:144  Non-zero metrics in the last 30s        {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":270,"time":{"ms":271}},"total":{"ticks":740,"time":{"ms":745},"value":740},"user":{"ticks":470,"time":{"ms":474}}},"handles":{"limit":{"hard":1048576,"soft":1048576},"open":97},"info":{"ephemeral_id":"6584086a-eff4-46b5-9be0-93892dad9d97","uptime":{"ms":30191}},"memstats":{"gc_next":36421840,"memory_alloc":32140904,"memory_total":55133048,"rss":65593344}},"filebeat":{"events":{"active":4214,"added":4219,"done":5},"harvester":{"open_files":89,"running":88,"started":88}},"libbeat":{"config":{"module":{"running":0},"reloads":2},"output":{"type":"logstash"},"pipeline":{"clients":2,"events":{"active":4117,"filtered":88,"published":4116,"total":4205}}},"registrar":{"states":{"current":5,"update":5},"writes":{"success":6,"total":6}},"system":{"cpu":{"cores":8},"load":{"1":1.9,"15":0.61,"5":0.9,"norm":{"1":0.2375,"15":0.0763,"5":0.1125}}}}}}
2020-11-04T16:42:54.289Z        ERROR   pipeline/output.go:100  Failed to connect to backoff(async(tcp://logstash-service:5044)): dial tcp 10.43.145.162:5044: i/o timeout
2020-11-04T16:42:54.289Z        INFO    pipeline/output.go:93   Attempting to reconnect to backoff(async(tcp://logstash-service:5044)) with 1 reconnect attempt(s)
logstash pod logs:
[WARN ] 2020-11-04 15:45:04.648 [Ruby-0-Thread-4: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.1.1-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:232] elasticsearch - Attempted to resurrect connection to dead ES instance, but got an error. :url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"
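
The logstash error above shows a connection attempt to http://elasticsearch:9200, even though the pipeline in logstash-configmap points at http://10.10.0.82:9200; some output (possibly a default shipped in the image, possibly monitoring settings) is still using the bare hostname elasticsearch. If an output should target the in-cluster Service directly, the fully qualified DNS name is one option (a sketch, assuming the elasticsearch Service in kube-system defined earlier):

```
output {
  elasticsearch {
    # fully qualified in-cluster name of the Service defined above (assumption)
    hosts => ["http://elasticsearch.kube-system.svc.cluster.local:9200"]
  }
}
```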

[Comments]

What address are you using to view Kibana? 195.134.187.25:32010
Are there any errors? Are the pods up and running? Can you share the output of kubectl get svc -n kube-system?
elasticsearch ClusterIP 10.43.50.63 10.10.0.82 9200/TCP 31m kibana-logging NodePort 10.43.58.127 10.10.0.82 5601:32010/TCP 4m4s kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 23d logstash-service ClusterIP 10.43.130.36 10.10.0.82 5044/TCP @MariuszK.
@Behnam please help me.

[Answer 1]

From your architecture, I understand that you are using Filebeat >> Logstash >> Elasticsearch >> Kibana.

So, in filebeat.yml you have chosen logstash as the output. However, you have given the wrong port for the logstash output in filebeat.yml.

It should be:

output.logstash:
  hosts: ['http://195.134.187.25:5044']

As you can see in logstash_config.yaml, you have set 5044 as the beats input port. So make that change to output.logstash in filebeat.yml.

[Discussion]

Thanks for your reply @Sourav Atta, but I have already tested with hosts: ['195.134.187.25:5044'] and there is no output. Actually I am trying to have filebeat send directly to elasticsearch.
Oh, okay. In that case, do you need to change output.logstash in filebeat.yml to output.elasticsearch?
Are my deployment files correct? Please check @sourav-atta
It is a bit confusing to follow; please try it, and if you hit any error, add that error here. Also check the harvester while running filebeat: if the harvester is not running, no logs will be stored into elasticsearch.
Please help me... I have added the filebeat and logstash logs above; there are errors.
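
For the direct Filebeat-to-Elasticsearch route discussed above, the output.logstash section of filebeat.yml would be replaced by an output.elasticsearch section; a minimal sketch using the address from these manifests:

```yaml
# Replaces output.logstash in filebeat.yml; Filebeat allows only one
# output to be enabled at a time
output.elasticsearch:
  hosts: ["http://10.10.0.82:9200"]
```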
