ELK for Log Analysis (Filebeat + Logstash + Elasticsearch): Configuration
Posted by lasse1897
We use Filebeat to read the log files and ship them to Logstash; Logstash processes the events and forwards them to Elasticsearch.
1. Filebeat
Project log files:
Filebeat reads the files configured under paths; with the glob below it automatically picks up every /data/share/business_log/TA-*/debug.log file.
#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /usr/local/server/openresty/nginx/logs/*.log
    - /data/share/business_log/TA-*/debug.log
    #- c:\programdata\elasticsearch\logs\*
Filebeat's handling of multiline logs:
multiline:
  pattern: '^[0-2][0-9]:[0-5][0-9]:[0-5][0-9]'
  negate: true
  match: after
The configuration above means: any line that does not begin with a time stamp is merged onto the end of the previous line (the regex is rough, but it works well enough).
pattern: the regular expression tested against each line
negate: true or false, default false. With false, lines that match the pattern are merged into the previous line; with true, lines that do not match the pattern are merged into the previous line.
match: after or before, i.e. merge onto the end or the beginning of the previous line
There are two more settings, also commented out by default; unless you have special requirements you can leave them alone:
max_lines: 500
timeout: 5s
max_lines: the maximum number of lines merged into one event, default 500
timeout: the timeout for assembling one multiline event, default 5s; it prevents the merge from consuming too much time or even hanging
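Putting the pieces together, a minimal prospector sketch with multiline enabled could look like the following. This is an illustrative assembly of the settings above, not taken verbatim from the original config, and it assumes the Filebeat 6.x syntax used in this post; the dotted multiline.* keys are equivalent to the nested multiline: block.

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /data/share/business_log/TA-*/debug.log
  # Any line that does not start with HH:MM:SS is appended to the previous line
  multiline.pattern: '^[0-2][0-9]:[0-5][0-9]:[0-5][0-9]'
  multiline.negate: true
  multiline.match: after
  multiline.max_lines: 500
  multiline.timeout: 5s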
Nginx log files:
#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- type: log

  # Change to true to enable this prospector configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/server/openresty/nginx/logs/access.log
    - /usr/local/server/openresty/nginx/logs/error.log
    #- /data/share/business_log/TA-*/debug.log
    #- c:\programdata\elasticsearch\logs\*
Output configuration
Comment out the settings under the Elasticsearch output and fill in the Logstash output instead; Filebeat will then send the log lines it reads to one of the Logstash servers configured under hosts.
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["172.18.1.152:5044","172.18.1.153:5044","172.18.1.154:5044"]
  index: "logstash-%{+yyyy.MM.dd}"

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
Filebeat startup command: nohup ./filebeat -e -c filebeat-TA.yml >/dev/null 2>&1 &
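Before daemonizing Filebeat like this, it can be worth sanity-checking the setup first. A minimal sketch, assuming Filebeat 6.x (older 5.x releases used the -configtest flag instead of the test subcommand):

# Verify that the YAML parses correctly
./filebeat test config -c filebeat-TA.yml
# Verify that the configured Logstash hosts are reachable
./filebeat test output -c filebeat-TA.yml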
2. Logstash
Basic configuration
Logstash itself cannot form a cluster. Filebeat checks which of the configured Logstash servers are reachable and sends its data to an available one.
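Note that by default Filebeat sticks to a single (randomly chosen) host from the list and only switches when that host becomes unresponsive; to actively spread events across all three Logstash servers, the documented loadbalance option can be enabled on the Filebeat side. A small sketch of the output block from section 1 with that flag added:

output.logstash:
  hosts: ["172.18.1.152:5044","172.18.1.153:5044","172.18.1.154:5044"]
  # Distribute events across all listed hosts instead of pinning to one
  loadbalance: true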
The Logstash configuration below listens on port 5044 for the logs Filebeat sends over, runs grok to parse them, sets a different type depending on the log source, and stores the events in the Elasticsearch cluster.
Project logs and Nginx logs are handled in the same config file. Note that the Elasticsearch index name must not contain uppercase letters, otherwise strange bugs appear.
input {
  beats {
    port => "5044"
  }
}

filter {
  date {
    match => ["@timestamp", "yyyy-MM-dd HH:mm:ss"]
  }
  grok {
    match => {
      "source" => "(?<type>([A-Za-z]*-[A-Za-z]*-[A-Za-z]*)|([A-Za-z]*-[A-Za-z]*)|access|error)"
    }
  }
}

output {
  # Different project logs need their own conditional branches here
  if [type] == "MS-System-OTA" {
    elasticsearch {
      hosts => ["172.18.1.152:9200","172.18.1.153:9200","172.18.1.154:9200"]
      index => "logstash-ms-system-ota-%{+YYYY.MM.dd}"
    }
  } else if [type] == "access" or [type] == "error" {
    elasticsearch {
      hosts => ["172.18.1.152:9200","172.18.1.153:9200","172.18.1.154:9200"]
      index => "logstash-nginx-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["172.18.1.152:9200","172.18.1.153:9200","172.18.1.154:9200"]
    }
  }
  stdout {
    codec => rubydebug
  }
}
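A quick way to catch syntax mistakes in this file before (re)starting the pipeline is Logstash's built-in config check; a minimal sketch (the --config.test_and_exit flag is standard in Logstash 5.x and 6.x):

# Parse the pipeline config and exit; reports errors without starting Logstash
./bin/logstash -f ./config/conf.d/logstash-simple.conf --config.test_and_exit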
The grok-patterns file shipped with Logstash:
USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT (?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))
POSINT (?:[1-9][0-9]*)
NONNEGINT (?:[0-9]+)
WORD \w+
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}
# Networking
MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
IPV6 ((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)(\.(25[0-5]|2[0-4]\d|1\d\d|[1-9]?\d)){3}))|:)))(%.+)?
IPV4 (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
IP (?:%{IPV6}|%{IPV4})
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOST %{HOSTNAME}
IPORHOST (?:%{HOSTNAME}|%{IP})
HOSTPORT %{IPORHOST}:%{POSINT}
# paths
PATH (?:%{UNIXPATH}|%{WINPATH})
UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
TTY (?:/dev/(pts|tty([pq])?)(\w+)?/?(?:[0-9]+))
WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
URIHOST %{IPORHOST}(?::%{POSINT:port})?
# uripath comes loosely from RFC1738, but mostly from what Firefox
# doesn't turn into %XX
URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=@#%_\-]*)+
#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?
URIPARAM \?[A-Za-z0-9$.+!*'|(){},~@#%&/=:;_?\-\[\]]*
URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?
# Months: January, Feb, 3, 03, 12, December
MONTH (?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHNUM2 (?:0[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])
# Days: Monday, Tue, Thu, etc...
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)
# Years?
YEAR (?>\d\d){1,2}
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
# '60' is a leap second in most time standards and thus is valid.
SECOND (?:(?:[0-5]?[0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T|UTC)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_RFC2822 %{DAY}, %{MONTHDAY} %{MONTH} %{YEAR} %{TIME} %{ISO8601_TIMEZONE}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}
DATESTAMP_EVENTLOG %{YEAR}%{MONTHNUM2}%{MONTHDAY}%{HOUR}%{MINUTE}%{SECOND}
# Syslog Dates: Month Day HH:MM:SS
SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
PROG (?:[\w._/%-]+)
SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
SYSLOGHOST %{IPORHOST}
SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}
# Shortcuts
QS %{QUOTEDSTRING}
# Log formats
SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
COMMONAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-)
COMBINEDAPACHELOG %{COMMONAPACHELOG} %{QS:referrer} %{QS:agent}
# Log Levels
LOGLEVEL ([Aa]lert|ALERT|[Tt]race|TRACE|[Dd]ebug|DEBUG|[Nn]otice|NOTICE|[Ii]nfo|INFO|[Ww]arn?(?:ing)?|WARN?(?:ING)?|[Ee]rr?(?:or)?|ERR?(?:OR)?|[Cc]rit?(?:ical)?|CRIT?(?:ICAL)?|[Ff]atal|FATAL|[Ss]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)
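If you add patterns of your own on top of this file, grok can load them from a directory through its standard patterns_dir option. A sketch, where the ./patterns directory and the TANAME pattern name are hypothetical, chosen to mirror the project-name grok used in the filter above:

# ./patterns/custom (hypothetical file) would contain the single line:
#   TANAME [A-Za-z]+-[A-Za-z]+(-[A-Za-z]+)?

filter {
  grok {
    patterns_dir => ["./patterns"]
    match => { "source" => "%{TANAME:type}" }
  }
}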
A few grok demos written for the different message formats found in the log files:
- Handling a message from the Nginx error.log:
# message: 2018/09/18 16:33:51 [error] 15003#0: *545757 no live upstreams while connecting to upstream, client: 39.108.4.83, server: dev-springboot-admin.tvflnet.com, request: "POST /instances HTTP/1.1", upstream: "http://localhost/instances", host: "dev-springboot-admin.tvflnet.com"
filter {
  # Define the structure of the data
  grok {
    match => { "message" => "%{DATA:timestamp} \[%{DATA:level}\] %{DATA:nginxmessage}, client: %{DATA:client}, server: %{DATA:server}, request: \"%{DATA:request}\", upstream: \"%{DATA:upstream}\", host: \"%{DATA:host}\"" }
  }
}
- Handling a message from the Nginx error.log (variant with a referrer field):
# message: 2018/04/19 20:40:27 [error] 4222#0: *53138 open() "/data/local/project/WebSites/AppOTA/theme/js/frame/layer/skin/default/icon.png" failed (2: No such file or directory), client: 218.17.216.171, server: dev-app-ota.tvflnet.com, request: "GET /theme/js/frame/layer/skin/default/icon.png HTTP/1.1", host: "dev-app-ota.tvflnet.com", referrer: "http://dev-app-ota.tvflnet.com/theme/js/frame/layer/skin/layer.css"
filter {
  # Define the structure of the data
  grok {
    match => { "message" => "%{DATA:timestamp} \[%{DATA:level}\] %{DATA:nginxmessage}, client: %{DATA:client}, server: %{DATA:server}, request: \"%{DATA:request}\", host: \"%{DATA:host}\", referrer: \"%{DATA:referrer}\"" }
  }
}
- Handling a message from the Lua error.log:
# message: 2018/09/05 18:02:19 [error] 2325#0: *17083157 [lua] PushFinish.lua:38: end push statistics, client: 119.137.53.205, server: dev-system-ota-statistics.tvflnet.com, request: "POST /upgrade/push HTTP/1.1", host: "dev-system-ota-statistics.tvflnet.com"
filter {
  # Define the structure of the data
  grok {
    match => { "message" => "%{DATA:timestamp} \[%{DATA:level}\] %{DATA:luamessage}, client: %{DATA:client}, server: %{DATA:server}, request: \"%{DATA:request}\", host: \"%{DATA:host}\"" }
  }
}
- Handling a message from the TV-client API log:
# message: traceid:[Thread:943-sn:sn-mac:mac] 2018-09-18 11:07:03.525 DEBUG com.flnet.utils.web.log.DogLogAspect 55 - Params-参数(JSON):{"backStr":"{"groupid":5}","build":201808310938,"ip":"119.147.146.189","mac":"mac","modelCode":"SHARP_0_50#SHARP#IQIYI#LCD_50SUINFCA_H","sn":"sn","version":"modelCode"}
filter {
  # Define the structure of the data
  grok {
    match => { "message" => "traceid:%{DATA:traceid}\[Thread:%{DATA:thread}-sn:%{DATA:sn}-mac:%{DATA:mac}\] %{TIMESTAMP_ISO8601:timestamp} %{DATA:level} %{GREEDYDATA:message}" }
  }
}
- Handling a message from the project logs:
For several different message formats, configure multiple grok blocks, one per format (see also the combined sketch after this demo).
# message: traceid:[] 2018-09-14 02:14:48.209 WARN de.codecentric.boot.admin.client.registration.ApplicationRegistrator 115 - Failed to register application as Application(name=ta-system-ota, managementUrl=http://TV-DEV-API01:10005/actuator, healthUrl=http://TV-DEV-API01:10005/actuator/health, serviceUrl=http://TV-DEV-API01:10005/, metadata={startup=2018-09-10T10:20:41.812+08:00}) at spring-boot-admin ([https://dev-springboot-admin.tvflnet.com/instances]): I/O error on POST request for "https://dev-springboot-admin.tvflnet.com/instances": connect timed out; nested exception is java.net.SocketTimeoutException: connect timed out. Further attempts are logged on DEBUG level
filter {
  # Define the structure of the data
  grok {
    match => { "message" => "traceid:\[%{DATA:traceid}\] %{TIMESTAMP_ISO8601:timestamp} %{DATA:level} %{GREEDYDATA:message}" }
  }
}
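As an alternative to stacking several grok filters, a single grok can also try a list of patterns in order and stop at the first match (break_on_match defaults to true). A minimal sketch combining simplified versions of two of the demos above:

filter {
  grok {
    # Patterns are tried top to bottom; the first one that matches wins
    match => { "message" => [
      "traceid:\[%{DATA:traceid}\] %{TIMESTAMP_ISO8601:timestamp} %{DATA:level} %{GREEDYDATA:msg}",
      "%{DATA:timestamp} \[%{DATA:level}\] %{GREEDYDATA:nginxmessage}"
    ] }
  }
}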
Logstash startup command: nohup ./bin/logstash -f ./config/conf.d/logstash-simple.conf >/dev/null 2>&1 &