Filebeat: reading nginx logs and writing them to Kafka
Filebeat configuration for writing to Kafka:
filebeat.inputs:
- type: log
  paths:
    - /tmp/access.log
  tags: ["nginx-test"]
  fields:
    type: "nginx-test"
    log_topic: "nginxmessages"
  fields_under_root: true

processors:
- drop_fields:
    fields: ["beat", "input", "source", "offset"]

name: 10.10.5.119

output.kafka:
  enabled: true
  hosts: ["10.78.1.85:9092", "10.78.1.87:9092", "10.78.1.71:9092"]
  topic: "%{[log_topic]}"
  partition.round_robin:
    reachable_only: true
  worker: 2
  required_acks: 1
  compression: gzip
  max_message_bytes: 10000000
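Because `fields_under_root: true` is set, the custom `type` and `log_topic` fields are promoted to the top level of each JSON event, which is what lets the `topic: "%{[log_topic]}"` pattern in `output.kafka` resolve per event. A minimal sketch of this, using a hypothetical event payload (the field values mirror the config above, but the exact JSON is illustrative, not captured from a real broker):

```python
import json

# Hypothetical shape of one event as Filebeat publishes it to Kafka.
# With fields_under_root: true, "type" and "log_topic" sit at the top
# level instead of under "fields"; the drop_fields processor has already
# removed "beat", "input", "source" and "offset".
raw = '''{
  "@timestamp": "2019-01-01T00:00:00.000Z",
  "message": "10.0.0.1 - - [01/Jan/2019:00:00:00 +0000] \\"GET / HTTP/1.1\\" 200 612",
  "tags": ["nginx-test"],
  "type": "nginx-test",
  "log_topic": "nginxmessages"
}'''

event = json.loads(raw)

# The output.kafka topic pattern "%{[log_topic]}" resolves per event:
topic = event["log_topic"]
print(topic)              # nginxmessages
print("offset" in event)  # False: removed by drop_fields
```

If `fields_under_root` were left at its default of `false`, the topic pattern would instead have to be written as `%{[fields.log_topic]}`.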
Logstash configuration for reading from Kafka:
input {
  kafka {
    bootstrap_servers => "10.78.1.85:9092,10.78.1.87:9092,10.78.1.71:9092"
    topics => ["nginxmessages"]
    codec => "json"
  }
}
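The `kafka` input above only consumes events; a working pipeline also needs an `output` section. A minimal sketch for verifying the pipeline end to end (the Elasticsearch host and index name here are assumptions, not part of the original post):

```
output {
  # stdout with rubydebug is useful for checking that events arrive as parsed JSON
  stdout { codec => rubydebug }

  # Example destination; host and index pattern are illustrative
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "nginx-%{+YYYY.MM.dd}"
  }
}
```

The `codec => "json"` setting on the input matters here: Filebeat publishes each event to Kafka as a JSON document, so without it Logstash would treat the whole document as an opaque `message` string.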