Filebeat Log Collection
Architecture 1:
filebeat -> logstash1 -> redis -> logstash2 -> elasticsearch (cluster) -> kibana
The installation steps are not covered here, since installing the packages should pose no difficulty.
(The software layout can be adjusted to suit your own environment.)
230: install filebeat, logstash1, elasticsearch
232: install logstash2, redis, elasticsearch, kibana
Note: filebeat is very strict about the format of its config file (YAML indentation matters).
1. Configure the filebeat config file:
[[email protected] filebeat]# cat /etc/filebeat/filebeat.yml
filebeat:
  prospectors:
#    -                                        # start of a log definition
#      paths:                                 # define the paths
#        - /var/www/logs/access.log           # absolute path
#      input_type: log                        # the log type is log
#      document_type: api4-nginx-accesslog    # must match the name defined in logstash; logstash uses it for its type check
    -
      paths:
        - /opt/apps/huhu/logs/ase.log
      input_type: log
      document_type: "ase-ase-log"
      encoding: utf-8
      tail_files: true                        # start reading from the end of the file
      multiline.pattern: '^\['                # pattern marking the start of an event
      multiline.negate: true
      multiline.match: after                  # merge non-matching lines into the previous event
      #tags: ["ase-ase"]
    -
      paths:                                  # collect JSON-format logs
        - /var/log/nginx/access.log
      input_type: log
      document_type: "nginx-access-log"
      tail_files: true
      json.keys_under_root: true
      json.overwrite_keys: true
  registry_file: /var/lib/filebeat/registry
output:                                       # ship to 230
  logstash:
    hosts: ["192.168.0.230:5044"]
shipper:
logging:
  to_files: true
  files:
    path: /tmp/mybeat
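Because the YAML format is so strict, it is worth validating the file before restarting the service. A minimal check, assuming a Filebeat release (1.x/5.x) that still supports the -configtest flag (newer releases use "filebeat test config" instead):
filebeat -configtest -c /etc/filebeat/filebeat.yml    # exits with an error if the YAML or an option is invalid
systemctl restart filebeat                            # or: service filebeat restart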
2. Configure logstash on 230 (beats input --> redis output):
[[email protected] conf.d]# pwd
/etc/logstash/conf.d
[[email protected] conf.d]# cat nginx-ase-input.conf
input {
  beats {
    port => 5044
    codec => "json"
  }
}
output {
  if [type] == "nginx-access-log" {
    redis {                                   # write the nginx access log into redis
      data_type => "list"
      key => "nginx-accesslog"
      host => "192.168.0.232"
      port => "6379"
      db => "4"
      password => "123456"
    }
  }
  if [type] == "ase-ase-log" {
    redis {                                   # write the ase application log into redis
      data_type => "list"
      key => "ase-log"
      host => "192.168.0.232"
      port => "6379"
      db => "4"
      password => "123456"
    }
  }
}
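Before wiring up the consumer on 232, it helps to confirm that events are actually reaching redis. A quick check with redis-cli against the db and keys configured above (the list lengths should grow as new log lines arrive):
redis-cli -h 192.168.0.232 -p 6379 -a 123456 -n 4 LLEN nginx-accesslog
redis-cli -h 192.168.0.232 -p 6379 -a 123456 -n 4 LLEN ase-log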
3. Configure logstash on 232 to read from redis and write to elasticsearch (redis input --> elasticsearch output):
[[email protected] conf.d]# pwd
/etc/logstash/conf.d
[[email protected] conf.d]# cat nginx-ase-output.conf
input {
  redis {
    type => "nginx-access-log"
    data_type => "list"
    key => "nginx-accesslog"
    host => "192.168.0.232"
    port => "6379"
    db => "4"
    password => "123456"
    codec => "json"
  }
  redis {
    type => "ase-ase-log"
    data_type => "list"
    key => "ase-log"
    host => "192.168.0.232"
    port => "6379"
    db => "4"
    password => "123456"
  }
}
output {
  if [type] == "nginx-access-log" {
    elasticsearch {
      hosts => ["192.168.0.232:9200"]
      index => "nginx-accesslog-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "ase-ase-log" {
    elasticsearch {
      hosts => ["192.168.0.232:9200"]
      index => "ase-log-%{+YYYY.MM.dd}"
    }
  }
}
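Once this logstash instance is running, the daily indices should start appearing in elasticsearch. A quick way to confirm, using the _cat API on 232:
curl 'http://192.168.0.232:9200/_cat/indices?v' | grep -E 'nginx-accesslog|ase-log'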
4. On 232, wire elasticsearch ---> kibana.
In kibana, just add the elasticsearch indices (index patterns nginx-accesslog-* and ase-log-*).
Architecture 2:
filebeat -> redis -> logstash -> elasticsearch -> kibana    # Drawback: filebeat's redis output here is limited; for the moment I have not found a way to write to multiple keys.
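A note on the drawback above: newer Filebeat releases (5.x and later) let the redis output route events to several lists via output.redis.keys with conditions. The following is only a hedged sketch assuming Filebeat 5.x+; the list names and the source-based conditions are illustrative, not taken from this setup:
output.redis:
  hosts: ["192.168.0.232:6379"]
  db: 3
  password: "123456"
  key: "default-log"                 # fallback list for events matching no rule below
  keys:
    - key: "nginx-access-log"        # route nginx events to their own list
      when.contains:
        source: "/var/log/nginx/"
    - key: "qpq-qpq-log"             # route the application log to its own list
      when.contains:
        source: "/opt/apps/qpq/"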
1. Filebeat configuration:
[[email protected] yes_yml]# cat filebeat.yml
filebeat:
  prospectors:
#    -                                        # start of a log definition
#      paths:                                 # define the paths
#        - /var/www/logs/access.log           # absolute path
#      input_type: log                        # the log type is log
#      document_type: api4-nginx-accesslog    # must match the name defined in logstash; logstash uses it for its type check
    -
      paths:
        - /opt/apps/qpq/logs/qpq.log
      input_type: log
      document_type: "qpq-qpq-log"
      encoding: utf-8
      tail_files: true
      multiline.pattern: '^\['
      multiline.negate: true
      multiline.match: after
      #tags: ["qpq-qpq-log"]
  registry_file: /var/lib/filebeat/registry
output:
  redis:
    host: "192.168.0.232"
    port: 6379
    db: 3
    password: "123456"
    timeout: 5
    reconnect_interval: 1
    index: "qpq-qpq-log"                      # redis list key; must match the key read by the logstash redis input below
shipper:
logging:
  to_files: true
  files:
    path: /tmp/mybeat
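As in architecture 1, a quick sanity check that filebeat is pushing events into the expected list on 232 (db 3 this time):
redis-cli -h 192.168.0.232 -p 6379 -a 123456 -n 3 LLEN qpq-qpq-log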
2. On 232, ship from redis to elasticsearch (and on to kibana):
[[email protected] yes_yml]# cat systemlog.conf
input {
  redis {
    type => "qpq-qpq-log"
    data_type => "list"
    key => "qpq-qpq-log"                      # must match the index/key set in the filebeat redis output
    host => "192.168.0.232"
    port => "6379"
    db => "3"
    password => "123456"
  }
}
output {
  if [type] == "qpq-qpq-log" {
    elasticsearch {
      hosts => ["192.168.0.232:9200"]
      index => "qpq-qpq-log-%{+YYYY.MM.dd}"
    }
  }
}
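And the same index check against elasticsearch once logstash has started draining the list:
curl 'http://192.168.0.232:9200/_cat/indices?v' | grep qpq-qpq-log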
3. On 232, wire elasticsearch ---> kibana.
In kibana, just add the elasticsearch index (index pattern qpq-qpq-log-*).