Deploying filebeat + redis + logstash + es + kibana
Posted by jiangtang
Environment: Ubuntu 16.04.2
filebeat 7.4.2
logstash 7.4.2
elasticsearch 7.4.2
kibana 7.4.2
redis 4.0.11 (docker)
IP address | Hostname | Components
192.168.0.11 | es1 | elasticsearch filebeat
192.168.0.12 | es2 | elasticsearch redis
192.168.0.13 | es3 | elasticsearch
192.168.0.14 | kibana | kibana logstash
Download links:
https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/apt/pool/main/k/kibana/kibana-7.4.2-amd64.deb
https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/apt/pool/main/e/elasticsearch/elasticsearch-7.4.2-amd64.deb
https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/apt/pool/main/l/logstash/logstash-7.4.2.deb
https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/apt/pool/main/f/filebeat/filebeat-7.4.2-amd64.deb
1. Install elasticsearch (run on all three es nodes)
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/apt/pool/main/e/elasticsearch/elasticsearch-7.4.2-amd64.deb
dpkg -i elasticsearch-7.4.2-amd64.deb
2. Edit the config file /etc/elasticsearch/elasticsearch.yml (on all three nodes)
cluster.name: my-application
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.seed_hosts: ["192.168.0.11", "192.168.0.12","192.168.0.13"]
cluster.initial_master_nodes: ["es1", "es2"]
gateway.recover_after_nodes: 2
Restart the service: systemctl restart elasticsearch
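After restarting all three nodes, the cluster state can be checked with curl http://192.168.0.11:9200/_cluster/health?pretty. As a minimal sketch of what to look for, the response can be parsed like this (the JSON payload below is a hypothetical healthy response, not captured from this deployment; the field names follow the _cluster/health API):

```python
import json

# Hypothetical response from: curl -s http://192.168.0.11:9200/_cluster/health
raw = '''{
  "cluster_name": "my-application",
  "status": "green",
  "number_of_nodes": 3,
  "number_of_data_nodes": 3,
  "active_shards_percent_as_number": 100.0
}'''

health = json.loads(raw)

# "green" means all primary and replica shards are allocated;
# "yellow" means replicas are unassigned, "red" means primaries are missing.
assert health["status"] == "green"
assert health["number_of_nodes"] == 3, "expected all three es nodes to have joined"
print(health["cluster_name"], health["status"])
```

If number_of_nodes is less than 3, check that discovery.seed_hosts is reachable from each node before going on.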
3. Deploy redis (on host es2)
docker run -d --name=redis --network=host -v /usr/local/redis:/data redis:4.0.11
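In this pipeline redis is only a buffer: filebeat pushes JSON-encoded events onto the list stored at the applog key, and logstash's redis input pops them off the other end. A minimal sketch of those list semantics, simulated here with an in-memory deque instead of a real redis connection (the event fields are illustrative):

```python
import json
from collections import deque

# Stand-in for the redis list at key "applog" (no real redis needed for this sketch)
applog = deque()

# filebeat side: each harvested log event becomes one JSON document on the list
event = {"@timestamp": "2021-01-02T03:04:05.123Z",
         "message": "2021-01-02 03:04:05,123 user=alice action=login",
         "tags": ["applog"]}
applog.append(json.dumps(event))       # ~ RPUSH applog '<json>'

# logstash side: the redis input with data_type => "list" pops events off the head
popped = json.loads(applog.popleft())  # ~ LPOP/BLPOP applog
print(popped["message"])
```

Against the real deployment you can watch the backlog on es2 with docker exec redis redis-cli LLEN applog; a steadily growing length means logstash is not keeping up with filebeat.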
4. Deploy kibana (on the kibana host)
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/apt/pool/main/k/kibana/kibana-7.4.2-amd64.deb
dpkg -i kibana-7.4.2-amd64.deb
Edit the config file /etc/kibana/kibana.yml
#listen address
server.host: "0.0.0.0"
#addresses of the es cluster
elasticsearch.hosts: ["http://192.168.0.12:9200","http://192.168.0.11:9200","http://192.168.0.13:9200"]
#set the default UI language to Chinese
i18n.locale: "zh-CN"
Restart the service: systemctl restart kibana
5. Deploy logstash (on the kibana host)
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/apt/pool/main/l/logstash/logstash-7.4.2.deb
Logstash needs a Java runtime (if one is missing, install it first with apt install -y openjdk-8-jdk)
dpkg -i logstash-7.4.2.deb
Create the config file /etc/logstash/conf.d/logstash.conf
input {
  redis {
    host => "192.168.0.12"
    password => ''
    port => "6379"
    db => "0"
    data_type => "list"
    key => "applog"
  }
}
filter {
  #extract the timestamp from the message
  grok {
    match => ["message","%{TIMESTAMP_ISO8601:logdate}"]
  }
  #overwrite @timestamp with it
  date {
    match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS"]
    target => "@timestamp"
  }
  #drop fields that are no longer needed
  mutate {
    remove_field => ["logdate","tags","beat"]
  }
  kv {
    remove_char_key => "<>\[\]\(\)"
    remove_char_value => "<>\[\]"
    recursive => "false"
    field_split => " ,{}\[\]\(\)"
  }
}
output {
  elasticsearch {
    hosts => ["192.168.0.11:9200","192.168.0.12:9200","192.168.0.13:9200"]
    index => "common-%{+YYYY.MM.dd}"
  }
  #stdout { codec => rubydebug }
}
Restart the service: systemctl restart logstash
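The filter chain above can be approximated in plain Python to see what it does to one event: the grok pattern pulls an ISO8601-style timestamp out of message, the date filter parses it with the yyyy-MM-dd HH:mm:ss,SSS layout, and kv splits the rest into key/value fields. A rough stdlib-only sketch (the regex is simplified relative to grok's TIMESTAMP_ISO8601, and the sample log line is illustrative):

```python
import re
from datetime import datetime

message = "2021-01-02 03:04:05,123 user=alice action=login status=200"

# grok: %{TIMESTAMP_ISO8601:logdate} -> capture the leading timestamp
# (simplified regex; the real grok pattern is far more permissive)
m = re.match(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})", message)
logdate = m.group(1)

# date filter: parse "yyyy-MM-dd HH:mm:ss,SSS" and use it as @timestamp
timestamp = datetime.strptime(logdate, "%Y-%m-%d %H:%M:%S,%f")

# kv filter: split on whitespace and keep key=value pairs
fields = dict(tok.split("=", 1) for tok in message.split() if "=" in tok)

print(timestamp.isoformat(), fields)
```

Because the index name common-%{+YYYY.MM.dd} is derived from @timestamp, events land in the daily index for the day they were logged, not the day they were ingested.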
6. Deploy filebeat (on host es1)
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/apt/pool/main/f/filebeat/filebeat-7.4.2-amd64.deb
dpkg -i filebeat-7.4.2-amd64.deb
Edit the config file /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  clean_removed: true
  close_removed: true
  scan_frequency: 20s
  #clean_inactive: 3h
  #close_timeout: 1h
  #close_inactive: 1m
  exclude_files: ['.gz$']
  #ignore_older: 2h
  paths:
    - /opt/*log*/*.log
    - /opt/*log*/*/*.log
  multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  multiline.negate: true
  multiline.match: after
  tags: ["applog"]
  fields:
    log_type: applog
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat.log
  keepfiles: 4
  permissions: 0644
  rotateeverybytes: 104857600
max_procs: 2
output.redis:
  hosts: ["192.168.0.12:6379"]
  key: applog
  password: ""
  db: 0
Restart the service: systemctl restart filebeat
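The multiline settings above mean: a line beginning with a yyyy-mm-dd date starts a new event, and every line that does not match the pattern (negate: true) is appended after the previous matching line (match: after), so a stack trace stays attached to the log line that produced it. A small sketch of that grouping logic (the sample lines are illustrative):

```python
import re

# Same pattern as multiline.pattern in filebeat.yml
pattern = re.compile(r"^\d{4}-\d{2}-\d{2}")

lines = [
    "2021-01-02 03:04:05,123 ERROR something broke",
    "Traceback (most recent call last):",
    '  File "app.py", line 1, in <module>',
    "2021-01-02 03:04:06,456 INFO recovered",
]

# negate: true + match: after -> a non-matching line is appended to the
# event started by the most recent matching line
events = []
for line in lines:
    if pattern.match(line) or not events:
        events.append(line)
    else:
        events[-1] += "\n" + line

print(len(events))  # 2 events: the error with its traceback, then the info line
```

Getting negate/match wrong here is the usual cause of tracebacks arriving in elasticsearch as separate one-line events.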
Test that everything works:
Copy some log files under /opt/log/ (any path matched by the configured globs will do)
Check that the index has appeared in elasticsearch
Create an index pattern in kibana
At this point everything is working.