Installing Filebeat with Docker to read logs and output to Redis or Elasticsearch


1. Install Docker (installation steps are not covered here).

2. Filebeat version: 7.16.2

docker run -d --name filebeat \
  -v /Documents/filebeat/logs:/var/log/filebeat \
  -v /Documents/filebeat/filebeat.yml:/usr/share/filebeat/filebeat.yml \
  -v .../logs/business/error.log:/var/log/error.log \
  --privileged=true \
  elastic/filebeat:7.16.2

The last -v mapping is critical: if the host log file is not mounted into the container, Filebeat cannot read it, and Filebeat's own log will keep reporting that no new content has appeared in the last 30 seconds.
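A quick way to verify the mount (a minimal sketch; the container name filebeat and the in-container path /var/log/error.log come from the docker run command above):

# Confirm the mounted log file is visible inside the container
docker exec filebeat ls -l /var/log/error.log

# Tail Filebeat's own log to watch harvester activity or "no new content" warnings
docker logs -f filebeat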

filebeat.yml configuration:

filebeat.inputs:
- type: log
  enabled: true
  harvester_buffer_size: 409600
  ignore_older: 5h
  paths:
    # Important: use the path as seen inside the container (the mount target),
    # not the actual path on the host.
    - /var/log/error.log
  fields:
    log_source: business

output.redis:
  hosts: ["172.17.0.2:6379"]
  password: ""
  key: "filebeat"
  db: 1
  timeout: 5

output.elasticsearch:
  hosts: ["172.17.0.4:9200"]
  username: "elastic"
  password: "123456"
  indices:
    - index: "filebeat-%{+yyyy.MM.dd}"

Problem this solves: after Filebeat is installed, log data does not show up in Redis or ES, yet Filebeat's own log shows no errors. In almost every case this is because Filebeat never read the log file, so there was no data to ship.
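With the Redis output enabled, you can confirm that events are arriving by checking the length of the list stored under the configured key (a sketch, assuming redis-cli can reach the Redis host and db from the config above):

# Filebeat pushes events onto a Redis list named after the configured key
redis-cli -h 172.17.0.2 -n 1 LLEN filebeat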

Collecting Kubernetes logs with Filebeat via symlinks (use this method when the Docker data directory has been customized)


1. Installation

1. Download Filebeat

https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.4.0-linux-x86_64.tar.gz

2. Upload to the server and extract

tar -zxvf filebeat-8.4.0-linux-x86_64.tar.gz
cd filebeat-8.4.0-linux-x86_64
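Optionally, confirm the binary runs and reports the expected version before touching the configuration:

./filebeat version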

2. Configuration

The log source is read through Docker's symlinks:

  • Enable symlink support: symlinks: true
  • Symlinked log path: paths: /var/log/containers/*.log (the symlink chain is inspected in the snippet below)
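On a Kubernetes node, the files under /var/log/containers/ are symlinks that ultimately point into the Docker (or containerd) data directory; that is why symlinks: true is required, and why this method still works when the Docker data-root has been moved (for example to /data/docker). A quick way to see the chain (a sketch; run on a node, actual targets depend on your runtime):

# List the per-container symlinks created by the kubelet
ls -l /var/log/containers/ | head

# Resolve them to their real location under the Docker data directory
readlink -f /var/log/containers/*.log | head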

Edit filebeat.yml:

# ============================== Filebeat inputs ===============================
filebeat.inputs:
- type: container
  symlinks: true
  containers.ids:
  - "*"
  id: my-filestream-id

  enabled: true
  paths:
    - /var/log/containers/*.log
    #- /data/docker/containers/*/*-json.log
  fields:
    cluster: cluster-dev
    topic: kafka_log_18603
  fields_under_root: true
  tail_files: true
  
  json.keys_under_root: true
  
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
# -----------------------------  kafka  ---------------------------------------
output.kafka:
  hosts: ["134.64.15.155:9092"]
  topic: kafka_log
  partition.round_robin:
    reachable_only: true
# ================================= Processors =================================
processors:
  - add_host_metadata: ~
  - add_docker_metadata: ~ 
  - add_kubernetes_metadata: ~
# ================================== Logging ===================================

logging.level: debug

3. Run

./filebeat -e -c filebeat.yml
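The -e flag keeps Filebeat in the foreground and logs to stderr, which is convenient for a first run. For a longer-running test it can be pushed to the background (a minimal sketch; a systemd unit would be the more robust choice in production):

nohup ./filebeat -e -c filebeat.yml > filebeat.out 2>&1 &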

4. Kafka message format

Log file name format: podName_namespace_container_name-container_id


    "@timestamp": "2022-12-22T09:56:01.658Z",
    "@metadata": 
        "beat": "filebeat",
        "type": "_doc",
        "version": "8.4.0"
    ,
    "host": 
        "ip": [
            "134.64.15.155",
            "fe80::da3c:2b52:e460:abb",
            "fe80::7fdf:e64e:8e64:9a0d",
            "fe80::7689:6ff9:71a3:df3b",
            "172.17.0.1",
            "fe80::42:84ff:fe2d:83d3",
            "fe80::c491:23ff:fe46:38e3"
        ],
        "mac": [
            "00:50:56:93:37:ca",
            "02:42:84:2d:83:d3",
            "c6:91:23:46:38:e3"
        ],
        "hostname": "ceph-admin",
        "architecture": "x86_64",
        "name": "ceph-admin",
        "os": 
            "kernel": "3.10.0-957.el7.x86_64",
            "codename": "Core",
            "type": "linux",
            "platform": "centos",
            "version": "7 (Core)",
            "family": "redhat",
            "name": "CentOS Linux"
        ,
        "id": "06ae05ec30744b22be02552a86fa12ef",
        "containerized": false
    ,
    "stream": "stderr",
    "message": "I1222 09:56:01.657908       1 client.go:360] parsed scheme: \\"passthrough\\"",
    "cluster": "cluster-dev",
    "ecs": 
        "version": "8.0.0"
    ,
    "log": 
        "offset": 27399663,
        "file": 
            "path": "/var/log/containers/kube-apiserver-ceph-admin_kube-system_kube-apiserver-c18d1047a8c212dee3388fab593439a76e105ced40da81b080cb384522fa7d57.log"
        
    ,
    "input": 
        "type": "container"
    ,
    "topic": "kafka_log",
    "agent": 
        "name": "ceph-admin",
        "type": "filebeat",
        "version": "8.4.0",
        "ephemeral_id": "6f278819-f3ee-4391-a7ba-3ccb05d19649",
        "id": "e9cc4627-57a6-44f9-ba7f-11816dc977a1"
    
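To confirm that messages like the sample above actually arrive, a console consumer can be attached to the topic (a sketch, assuming the standard Kafka CLI tools are available on or near the broker at 134.64.15.155):

# Read everything currently in the topic that Filebeat publishes to
kafka-console-consumer.sh --bootstrap-server 134.64.15.155:9092 --topic kafka_log --from-beginning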

5. Cluster log directories
