Logstash + elasticsearch + kibana + nginx



Note: this system uses Logstash + elasticsearch + kibana + nginx for log analysis and display.

 

 

1 Environment and versions

1.1 Hosts

1.2 Prerequisites

2 Logstash configuration

3 Starting Kibana and elasticsearch

3.1 elasticsearch

3.2 kibana

4 nginx configuration

 


 

1 Environment and versions:

  • OS: CentOS 7.2.1511

  • Kernel: Linux 3.10.0-123.9.3.el7.x86_64

  • JDK: 1.8.0_74

  • logstash-2.2.2

Download (GitHub): https://github.com/elastic/logstash/tree/2.2

Function: collects and parses incoming logs, then stores them for later use (e.g. searching).

  • elasticsearch-2.2.0

Function: provides custom search over the results that logstash feeds into it.

Download (GitHub): https://github.com/elastic/elasticsearch/tree/2.2

  • kibana-4.4.1

Function: connects to elasticsearch-2.2.0 and provides the web interface.

Download (GitHub): https://github.com/elastic/kibana/tree/4.4

  • nginx: 1.9.12

         Forwards kibana's port to 80 and defines the domain name used for access.

 

1.1 Hosts:

web1: 10.46.90.80 (internal), xx.xx.xx.xx (external)

logs: 10.46.90.147 (internal), xx.xx.xx.xx (external)

 

1.2 Prerequisites:
  • NFS

NFS is set up on logs and shares /opt/logs, which is mounted on web1 at /home/wwwlogs; web1's PHP logs are written directly to /home/wwwlogs/*/ (see the sketch after this list).

logstash, kibana, and elasticsearch are all downloaded to /opt/.

  • JDK is already installed

  • nginx is installed
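
A minimal NFS sketch matching the hosts above (the export path, client IP, and mount point come from this section; the service commands are illustrative and assume nfs-utils is installed on CentOS 7):

# on logs: export /opt/logs to web1
[root@logs ~]# echo '/opt/logs 10.46.90.80(rw,sync,no_root_squash)' >> /etc/exports
[root@logs ~]# systemctl enable nfs-server && systemctl restart nfs-server

# on web1: mount the share where PHP writes its logs
[root@web1 ~]# mount -t nfs 10.46.90.147:/opt/logs /home/wwwlogs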

 

2 Logstash configuration

Logstash can be fetched with git and used directly from the checkout. Its configuration is the core of the setup: it collects and parses the logs and stores them for later use (e.g. searching).
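
For example, fetching the 2.2 branch straight into /opt (a sketch based on the download address listed above; if a bare source checkout is missing the bundled plugins, unpacking the official logstash-2.2.2 release archive to the same path works just as well):

[root@logs ~]# git clone -b 2.2 https://github.com/elastic/logstash.git /opt/logstash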


The grok filtering in logstash's shipper.conf configuration file uses Ruby regular expressions throughout; a handy Ruby regex tester is http://www.rubular.com/
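
As an illustration (the log line is hypothetical), the multiline pattern ^\[\d{4} used below matches any line that begins with "[" followed by a four-digit year, such as [2016-05-20 10:00:01] ERROR: .... Because negate => true and what => previous, every following line that does not start that way (a stack trace, for example) is appended to the previous event, so one PHP error becomes a single logstash message.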


Create the configuration file and edit it:

[root@logs ~]# mkdir /opt/logstash/conf.d

[root@logs ~]# vi /opt/logstash/conf.d/shipper.conf

input {

       #stdin {

       #}

  #file {

       #path  =>"/opt/logs/*/*_nginx.log"

       #type => "access"

       #codec => json

   #}

   file {

                path  => "/opt/logs/php/admin.etcchebao.com/*.log"

                #path  =>"/opt/logs/php/admin.etcchebao.com/admin.log"

                type => "admin"

                codec => multiline {

                # Grok pattern names are valid!:)

                        pattern => "^\[\d{4}"           #开头匹配[+4个年份字符

                        #pattern =>"^%{TIMESTAMP_ISO8601} "

                        negate => true

                        what => previous

                }

    }

 

   file {

                path  => "/opt/logs/php/passport.etcchebao.com/*.log"

                #path  =>"/opt/logs/php/passport.etcchebao.com/passport.log"

                type => "passport"

                codec => multiline {

                # Grok pattern names are valid!:)

                        pattern => "^\[\d{4}"           #开头匹配[+4个年份字符

                        #pattern =>"^%{TIMESTAMP_ISO8601} "

                        negate => true

                        what => previous

                }

    }

 

   file {

                path  => "/opt/logs/php/push.etcchebao.com/*.log"

                #path  =>"/opt/logs/php/push.etcchebao.com/push.log"

                type => "push"

                codec => multiline {

                # Grok pattern names are valid!:)

                        pattern =>"^\[\d{4}"           #开头匹配[+4个年份字符

                        #pattern =>"^%{TIMESTAMP_ISO8601} "

                        negate => true

                        what => previous

                }

    }

 

   file {

                path  => "/opt/logs/php/seller.etcchebao.com/*.log"

                #path  =>"/opt/logs/php/seller.etcchebao.com/seller.log"

                type => "seller"

                codec => multiline {

                # Grok pattern names are valid!:)

                        pattern =>"^\[\d{4}"           #开头匹配[+4个年份字符

                        #pattern =>"^%{TIMESTAMP_ISO8601} "

                        negate => true

                        what => previous

                }

    }

 

   file {

                path  => "/opt/logs/php/m.etcchebao.com/*.log"

                #path  =>"/opt/logs/php/m.etcchebao.com/m.log"

                type => "m"

                codec => multiline {

                # Grok pattern names are valid!:)

                        pattern =>"^\[\d{4}"           #开头匹配[+4个年份字符

                        #pattern =>"^%{TIMESTAMP_ISO8601} "

                        negate => true

                        what => previous

                }

    }

 

   file {

                path  => "/opt/logs/php/pay.etcchebao.com/*.log"

                #path  =>"/opt/logs/php/pay.etcchebao.com/pay.log"

                type => "pay"

                codec => multiline {

                # Grok pattern names are valid!:)

                        pattern =>"^\[\d{4}"           #开头匹配[+4个年份字符

                        #pattern =>"^%{TIMESTAMP_ISO8601} "

                        negate => true

                        what => previous

                }

    }

}

 

filter {

#      if [type] == "access" {

#               grok {

#                       match => {"message" => "%{COMBINEDAPACHELOG}" }

#               }

#               date {

#                       match => ["timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]

#               }

#      }

       grok {

                match => [

                        # 404 errors

                        "message","\:(?<Error_class>\d{3}?)\]",

                        # [Error] entries

                        "message","\[(?<Error_class>\Error?)\]",

                        # 500 errors

                        "message","系统(?<Error_class>\d{3}?)错误\.*ERROR_NO:(?<err_no>[0-9]*$?).*ERROR_STR:(?<err_str>.*$?)\\.*ERROR_LINE:(?<err_line>[0-9]*$?).*ERROR_FILE:(?<err_file>\\.*$?)\\n"

                ]

       }

}

 

# output to redis

#output {

#   redis {

#       host => "127.0.0.1"

#       port => "6379"

#      type => "nginx-log"

#       data_type => "list"

#       key => "logstash"

#   }

#}

 

# output to elasticsearch

output {

   elasticsearch {

       #hosts => ["127.0.0.1:9300"]

       hosts => "127.0.0.1"

       index => "logstash-%{type}-%{+YYYY.MM.dd}"

       document_type => "%{type}"

       #workers => 1

       #flush_size => 20000

       #idle_flush_time => 10

       #template_overwrite => true

    }

#  if [Error_class] != "404" {

#   exec {

#                       #command => "echo '%{timestamp}:%{message}' | mail -s 'Log_error: HttpException error' [email protected]"

#                       command => "echo '%{timestamp}:%{message}' | mail -s 'Log_error: HttpException [HttpException]' [email protected]"

#      }

#   }

}

 

output{

   if [Error_class] != "404" {

   exec {

                        #command =>"echo ‘%{timestamp}:%{message}‘ | mail -s ‘Log_error: HttpException error‘[email protected]"

                        command =>"echo ‘%{timestamp}:%{message}‘ | mail -s ‘Log_error: HttpException[HttpException]‘ [email protected]"

       }

    }

}

 

# output to stdout - for testing

output {

       stdout {

                codec => rubydebug

       }

}

 

Starting Logstash

[root@logs ~]# nohup /opt/logstash/bin/logstash -f /opt/logstash/conf.d/shipper.conf > /dev/null 2>&1 &
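
If the process fails to come up, the configuration file can be syntax-checked first; logstash 2.x supports a --configtest flag:

[root@logs ~]# /opt/logstash/bin/logstash --configtest -f /opt/logstash/conf.d/shipper.conf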

Check that it started:

(screenshot)

3 Starting Kibana and elasticsearch

Neither kibana nor elasticsearch needs to be installed; once downloaded they can be run directly, and it is best to start logstash first. Note that, by default, starting them with the root account is not allowed, so start them with the www user that is dedicated to running nginx.
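
Because they will run as www, make sure that user owns the unpacked directories first (a sketch, assuming the paths used below):

[root@logs ~]# chown -R www:www /opt/elasticsearch-2.2.0 /opt/kibana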

3.1 elasticsearch

[www@logs ~]$ nohup /opt/elasticsearch-2.2.0/bin/elasticsearch > /dev/null 2>&1 &

[www@logs ~]$ ps -elf | grep elasticsearch

Check the process:

(screenshot)

Check the port:

(screenshot)
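
The same checks can be run from the shell; by default elasticsearch listens on 9200 (HTTP) and 9300 (transport):

[www@logs ~]$ netstat -lntp | grep -E '9200|9300'
[www@logs ~]$ curl http://127.0.0.1:9200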

 

3.2 kibana

[www@logs ~]$ nohup /opt/kibana/bin/kibana > /dev/null 2>&1 &

[www@logs ~]$ ps -elf | grep kibana

Check the process:

(screenshot)

Check the port:

(screenshot)
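
kibana listens on 5601 by default, which is the port the nginx configuration in the next section proxies to:

[www@logs ~]$ netstat -lntp | grep 5601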

 

4 nginx configuration

[www@logs ~]$ vi /usr/local/nginx/conf/vhost/logs.etcchebao.cn.conf

server {

   listen 80;

   server_name logs.etcchebao.cn;

 

    location / {

       auth_basic "secret";

       auth_basic_user_file /usr/local/nginx/logs_etcchebao.passwd;

       proxy_pass http://127.0.0.1:5601;

    }

}
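
The auth_basic_user_file referenced above must exist before the site will work. A minimal sketch for creating it and reloading nginx (htpasswd comes from the httpd-tools package; the user name admin and the password are placeholders, and the nginx binary path assumes the /usr/local/nginx prefix used above):

[root@logs ~]# yum install -y httpd-tools
[root@logs ~]# htpasswd -bc /usr/local/nginx/logs_etcchebao.passwd admin 'YourPassword'
[root@logs ~]# /usr/local/nginx/sbin/nginx -t && /usr/local/nginx/sbin/nginx -s reload

After the reload, http://logs.etcchebao.cn prompts for these basic-auth credentials and then opens the kibana UI.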


This article originally appeared on the "蒲芦" blog; please keep this attribution: http://7116805.blog.51cto.com/7106805/1783029
