Collecting k8s logs with Filebeat - symlinks (use this method for a custom Docker directory)

Posted by 杜林晓




1. Installation

1. Download Filebeat

https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.4.0-linux-x86_64.tar.gz

2. Upload to the server and extract

tar -zxvf filebeat-8.4.0-linux-x86_64.tar.gz
cd filebeat-8.4.0-linux-x86_64
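
A quick sanity check that the binary runs on this host:

./filebeat version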

2. Configuration

The log source is read through the symlinks that kubelet creates under /var/log/containers, which point (via /var/log/pods) at the real log files inside the Docker data directory, so Filebeat has to be allowed to follow them (a quick check of the link chain is shown after this list):

  • Enable symlink following: symlinks: true
  • Symlink path: paths: /var/log/containers/*.log
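
To confirm that the link chain really resolves into the (custom) Docker directory, follow one symlink on the node; the file name below is a placeholder, so substitute any real file from the listing:

ls -l /var/log/containers/ | head                                              # kubelet-created symlinks, one per container
readlink -f /var/log/containers/<pod>_<namespace>_<container>-<container_id>.log   # should resolve to a file under the Docker data root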

Edit filebeat.yml

# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: container
  symlinks: true
  containers.ids:
  - "*"
  id: my-filestream-id

  enabled: true
  paths:
    - /var/log/containers/*.log
    #- /data/docker/containers/*/*-json.log
  fields:
    cluster: cluster-dev
    topic: kafka_log_18603
  fields_under_root: true
  tail_files: true
  
  json.keys_under_root: true
  
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
# -----------------------------  kafka  ---------------------------------------
output.kafka:
  hosts: ["134.64.15.155:9092"]
  topic: kafka_log
  partition.round_robin:
    reachable_only: true
# ================================= Processors =================================
processors:
  - add_host_metadata: ~
  - add_docker_metadata: ~ 
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================

logging.level: debug
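
Before starting the shipper, the configuration file and the Kafka connection can be verified with Filebeat's built-in test subcommands:

./filebeat test config -c filebeat.yml    # validates filebeat.yml
./filebeat test output -c filebeat.yml    # checks connectivity to the Kafka broker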

3. Run

./filebeat -e -c filebeat.yml
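
To verify that events actually arrive, attach a console consumer to the topic on the broker side (the script path depends on where Kafka is installed):

bin/kafka-console-consumer.sh --bootstrap-server 134.64.15.155:9092 --topic kafka_log --from-beginning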

4. Kafka message format

Log path format: podName_namespace_container_name-container_id
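
For example, the file in the sample message below, kube-apiserver-ceph-admin_kube-system_kube-apiserver-c18d1047….log, corresponds to pod kube-apiserver-ceph-admin, namespace kube-system and container kube-apiserver, followed by the container ID.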


    "@timestamp": "2022-12-22T09:56:01.658Z",
    "@metadata": 
        "beat": "filebeat",
        "type": "_doc",
        "version": "8.4.0"
    ,
    "host": 
        "ip": [
            "134.64.15.155",
            "fe80::da3c:2b52:e460:abb",
            "fe80::7fdf:e64e:8e64:9a0d",
            "fe80::7689:6ff9:71a3:df3b",
            "172.17.0.1",
            "fe80::42:84ff:fe2d:83d3",
            "fe80::c491:23ff:fe46:38e3"
        ],
        "mac": [
            "00:50:56:93:37:ca",
            "02:42:84:2d:83:d3",
            "c6:91:23:46:38:e3"
        ],
        "hostname": "ceph-admin",
        "architecture": "x86_64",
        "name": "ceph-admin",
        "os": 
            "kernel": "3.10.0-957.el7.x86_64",
            "codename": "Core",
            "type": "linux",
            "platform": "centos",
            "version": "7 (Core)",
            "family": "redhat",
            "name": "CentOS Linux"
        ,
        "id": "06ae05ec30744b22be02552a86fa12ef",
        "containerized": false
    ,
    "stream": "stderr",
    "message": "I1222 09:56:01.657908       1 client.go:360] parsed scheme: \\"passthrough\\"",
    "cluster": "cluster-dev",
    "ecs": 
        "version": "8.0.0"
    ,
    "log": 
        "offset": 27399663,
        "file": 
            "path": "/var/log/containers/kube-apiserver-ceph-admin_kube-system_kube-apiserver-c18d1047a8c212dee3388fab593439a76e105ced40da81b080cb384522fa7d57.log"
        
    ,
    "input": 
        "type": "container"
    ,
    "topic": "kafka_log",
    "agent": 
        "name": "ceph-admin",
        "type": "filebeat",
        "version": "8.4.0",
        "ephemeral_id": "6f278819-f3ee-4391-a7ba-3ccb05d19649",
        "id": "e9cc4627-57a6-44f9-ba7f-11816dc977a1"
    

5. Cluster log directories
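
A sketch of the directory layout on each node, assuming the json-file log driver and a custom Docker data root of /data/docker (the commented-out path in the input section above); the placeholder names are illustrative:

/var/log/containers/<pod>_<namespace>_<container>-<container_id>.log      # kubelet-created symlink, what Filebeat tails
/var/log/pods/<namespace>_<pod>_<pod_uid>/<container>/0.log               # intermediate symlink managed by kubelet
/data/docker/containers/<container_id>/<container_id>-json.log            # real log file under the custom Docker data root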
