Installing, Configuring and Using ELK
ELK Installation and Configuration
I. Installing the ES cluster
Build the ElasticSearch cluster on three servers:

node-1 (master)     10.170.13.1
node-2 (data node)  10.116.35.133
node-3 (data node)  10.44.79.57
Download the package from https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.4.3.rpm — grab elasticsearch-5.4.3.rpm on each of the three servers.
Install:

```
~]# yum install elasticsearch-5.4.3.rpm -y
```

Edit node-1's configuration file:

```
~]# vim /etc/elasticsearch/elasticsearch.yml

cluster.name: elasticsearch
node.name: node-1
network.host: 0.0.0.0
http.port: 9200
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true
node.data: true
discovery.zen.ping.unicast.hosts: ["10.170.13.1"]
```

node-2 and node-3 use the same file with only two differences: `node.name` becomes `node-2` / `node-3`, and `node.master` becomes `false`.

Once everything is configured, start node-1 (the master) first, then node-2 and node-3:

```
~]# service elasticsearch start
```

A fresh 5.x install tends to throw quite a few errors at startup; check the log and fix whatever applies to your system:

```
~]# tail -f /var/log/elasticsearch/elasticsearch.log
```
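The per-node elasticsearch.yml files above differ only in `node.name` and `node.master`. As an illustrative aside (not part of the original setup), a small Python sketch that renders all three from one template:

```python
# Render the per-node elasticsearch.yml shown above from one template.
# Only node.name and node.master differ between the three files.

def es_node_config(node_name: str, is_master: bool) -> str:
    return "\n".join([
        "cluster.name: elasticsearch",
        f"node.name: {node_name}",
        "network.host: 0.0.0.0",
        "http.port: 9200",
        "http.cors.enabled: true",
        'http.cors.allow-origin: "*"',
        f"node.master: {'true' if is_master else 'false'}",
        "node.data: true",
        'discovery.zen.ping.unicast.hosts: ["10.170.13.1"]',
    ])

for name, master in [("node-1", True), ("node-2", False), ("node-3", False)]:
    print(f"--- {name} ---")
    print(es_node_config(name, master))
```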
Error roundup (some of these fixes come from the community — thanks to everyone; collected here in one place):

Problem 1:

```
[2016-11-06T16:27:21,712][WARN ][o.e.b.JNANatives ] unable to install syscall filter:
java.lang.UnsupportedOperationException: seccomp unavailable: requires kernel 3.5+ with CONFIG_SECCOMP and CONFIG_SECCOMP_FILTER compiled in
    at org.elasticsearch.bootstrap.Seccomp.linuxImpl(Seccomp.java:349) ~[elasticsearch-5.0.0.jar:5.0.0]
    at org.elasticsearch.bootstrap.Seccomp.init(Seccomp.java:630) ~[elasticsearch-5.0.0.jar:5.0.0]
```

Cause: this is only a warning, caused by an old Linux kernel.
Fix: either install a newer Linux release, or ignore it — it does not affect operation.

Problem 2:

```
ERROR: bootstrap checks failed
max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]
```

Cause: the user's maximum open-file limit is too small.
Fix: as root, edit /etc/security/limits.conf and add something like:

```
* soft nofile 65536
* hard nofile 131072
* soft nproc 2048
* hard nproc 4096
```

Note: `*` applies to every Linux user; you can also name a specific user (e.g. hadoop). Save, exit, and log in again for it to take effect.

Problem 3:

```
max number of threads [1024] for user [es] likely too low, increase to at least [2048]
```

Cause: the user's maximum thread count is too small.
Fix: as root, edit /etc/security/limits.d/90-nproc.conf and change

```
* soft nproc 1024
```

to

```
* soft nproc 2048
```

Problem 4:

```
max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]
```

Cause: the maximum virtual memory map count is too small.
Fix: as root, edit /etc/sysctl.conf and add:

```
vm.max_map_count=655360
```

then run `sysctl -p` and restart elasticsearch; it should now start cleanly.

Problem 5: ElasticSearch cannot find a host or route at startup.
Cause: a broken unicast discovery configuration.
Fix: check config/elasticsearch.yml for

```
discovery.zen.ping.unicast.hosts: ["10.170.13.1"]
```

Usually the problem is here — watch the exact syntax.

Problem 6:

```
org.elasticsearch.transport.RemoteTransportException: Failed to deserialize exception response from stream
```

Cause: mismatched JDK versions between ElasticSearch nodes.
Fix: use the same JDK across the whole cluster.

Problem 7:

```
Unsupported major.minor version 52.0
```

Cause: the JDK is too old.
Fix: upgrade; ElasticSearch 5.0.0 requires JDK 1.8.

Problem 8:

```
bin/elasticsearch-plugin install license
ERROR: Unknown plugin license
```

Cause: the plugin command changed in ElasticSearch 5.0.0.
Fix: use the new command for all plugins, e.g. `bin/elasticsearch-plugin install x-pack`

Problem 9:

```
ERROR: bootstrap checks failed
system call filters failed to install; check the logs and fix your configuration or disable system call filters at your own risk
```

Cause: CentOS 6 does not support SecComp, and ES 5.2.1 defaults bootstrap.system_call_filter to true; the check fails and ES refuses to start. Details: https://github.com/elastic/elasticsearch/issues/22899
Fix: set bootstrap.system_call_filter to false in elasticsearch.yml, right below the Memory settings:

```
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
```
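Problems 2–4 above are all numeric limits, so they can be pre-checked before starting ES. A hypothetical helper (not from the original post), with the minimums taken straight from the error messages:

```python
# Compare current limits against the minimums the ES 5.x bootstrap
# checks demand (values taken from the error messages above).
MINIMUMS = {
    "max_file_descriptors": 65536,   # problem 2
    "max_threads": 2048,             # problem 3
    "vm.max_map_count": 262144,      # problem 4
}

def failed_checks(current: dict) -> list:
    """Return the names of limits that are below the required minimum."""
    return [name for name, need in MINIMUMS.items()
            if current.get(name, 0) < need]

# The default values quoted in the error messages all fail:
print(failed_checks({"max_file_descriptors": 4096,
                     "max_threads": 1024,
                     "vm.max_map_count": 65530}))
# → ['max_file_descriptors', 'max_threads', 'vm.max_map_count']
```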
Once startup succeeds, node-1's log shows node-2 and node-3 joining the cluster:
```
[2017-07-05T10:49:30,988][INFO ][o.e.c.s.ClusterService ] [node-1] added {{node-2}{9P1tDYlaTTCTLvgf56qiTg}{tDWHLBA5QVKJVigNeDx-yw}{10.116.35.133}{10.116.35.133:9300},}, reason: zen-disco-node-join[{node-2}{9P1tDYlaTTCTLvgf56qiTg}{tDWHLBA5QVKJVigNeDx-yw}{10.116.35.133}{10.116.35.133:9300}]
[2017-07-05T10:49:36,927][INFO ][o.e.c.s.ClusterService ] [node-1] added {{node-3}{seEWVcyKRnupt6eP2T3-Qg}{W5RrwtY2ToWxuzWFsFdPyA}{10.44.79.57}{10.44.79.57:9300},}, reason: zen-disco-node-join[{node-3}{seEWVcyKRnupt6eP2T3-Qg}{W5RrwtY2ToWxuzWFsFdPyA}{10.44.79.57}{10.44.79.57:9300}]
```
Open a browser and visit the node to test it:
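Besides the browser, the cluster can be checked from the command line. A sketch using only Python's standard library (the host and port follow the configuration above; `_cluster/health` is the standard ES health endpoint):

```python
import json
from urllib.request import urlopen

def health_url(host: str, port: int = 9200) -> str:
    # Build the URL of the ES cluster-health endpoint.
    return f"http://{host}:{port}/_cluster/health"

def cluster_status(host: str) -> str:
    """Fetch the cluster status: 'green', 'yellow' or 'red'."""
    with urlopen(health_url(host)) as resp:
        return json.load(resp)["status"]

if __name__ == "__main__":
    # Requires a running cluster, e.g. the node-1 built above.
    print(cluster_status("10.170.13.1"))
```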
That completes the ElasticSearch install. Next comes the management tool elasticsearch-head. Note: ElasticSearch 5.x differs substantially from 2.x and no longer supports the old 2.x-style plugin install.
Install Node.js:

```
~]# wget https://nodejs.org/dist/v8.1.3/node-v8.1.3-linux-x64.tar.gz
~]# tar xf node-v8.1.3-linux-x64.tar.gz -C /usr/local/
~]# mv /usr/local/node-v8.1.3-linux-x64 /usr/local/node.js
~]# echo 'export PATH=/usr/local/node.js/bin:$PATH' > /etc/profile.d/nodejs.sh
~]# source /etc/profile.d/nodejs.sh
~]# node -v
v8.1.3
~]# npm -v
5.0.3
```

Download and install elasticsearch-head:

```
~]# cd /usr/local/
~]# git clone git://github.com/mobz/elasticsearch-head.git
~]# cd /usr/local/elasticsearch-head/
elasticsearch-head]# npm install
elasticsearch-head]# npm install -g grunt-cli
elasticsearch-head]# npm install -g cnpm --registry=
```

Edit the configuration:

```
elasticsearch-head]# vim Gruntfile.js

connect: {
    server: {
        options: {
            hostname: '0.0.0.0',
            port: 9100,
            base: '.',
            keepalive: true
        }
    }
}
```

```
elasticsearch-head]# vim _site/app.js

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.170.13.1:9200";
```

Start it:

```
elasticsearch-head]# service elasticsearch stop
elasticsearch-head]# service elasticsearch start
elasticsearch-head]# grunt server
```
II. Installing logstash; monitoring rsyslog, nginx and ES logs
Package downloads: https://www.elastic.co/downloads/past-releases

Pick whichever version you need — ELK releases move fast. My ES is 5.4.3, so everything here is installed at 5.4.3 as well.
```
~]# wget https://artifacts.elastic.co/downloads/logstash/logstash-5.4.3.rpm    # download the rpm directly
```

Download and install it on all three machines of the ES cluster:

```
~]# yum install logstash-5.4.3.rpm -y
```
(Even installed from the rpm, logstash's binaries are not on the PATH; they live in /usr/share/logstash/bin/.)

Set the environment variable:

```
~]# echo 'export PATH=/usr/share/logstash/bin:$PATH' > /etc/profile.d/logstash.sh
~]# source /etc/profile.d/logstash.sh
```
Configuring logstash is a bit more involved, because it has to accept input, process it, and produce output. It uses a three-section configuration: `input` declares the sources, `filter` says how to process what comes in, and `output` says where the results go.
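The input → filter → output idea can be illustrated without logstash at all. A toy Python sketch (purely illustrative) that mimics the same type-based routing and daily index naming used in the all.conf that follows:

```python
# Toy illustration of logstash's three-stage model:
# input produces events, filter enriches them, output routes them.
from datetime import date

def input_stage(lines, type_):
    # Like a file/syslog input: each line becomes an event with a type.
    for line in lines:
        yield {"message": line, "type": type_}

def filter_stage(events):
    # Stand-in for real filters (grok, mutate, ...): just tag each event.
    for e in events:
        e["tagged"] = True
        yield e

def output_stage(events, day):
    # Route each event to a daily index, like index => "system-%{+YYYY.MM.dd}".
    return [(f"{e['type']}-{day:%Y.%m.%d}", e["message"]) for e in events]

events = filter_stage(input_stage(["boot ok", "disk full"], "system"))
print(output_stage(events, date(2017, 7, 5)))
# → [('system-2017.07.05', 'boot ok'), ('system-2017.07.05', 'disk full')]
```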
```
~]# vim /etc/logstash/conf.d/all.conf
```

Put this configuration in the file:

```
input {
    file {
        path => "/var/log/messages"              # log file path
        type => "system"
        start_position => "beginning"
    }
    syslog {
        type => "system-syslog"                  # the type is matched in output below
        host => "10.170.13.1"
        port => "514"
    }
    file {
        path => "/var/log/elasticsearch/elasticsearch.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {                     # multiline folds several lines into one event
            pattern => "^\["
            negate => "true"
            what => "previous"
        }
    }
    file {
        path => "/var/log/nginx/access-json.log"
        codec => "json"                          # the log is already JSON
        type => "nginx_access"
        start_position => "beginning"            # read from the start (default is the tail)
    }
}
output {
    if [type] == "system" {                      # route by matching the type
        elasticsearch {                          # ES output
            hosts => ["10.170.13.1:9200"]        # ES host:port
            index => "system-%{+YYYY.MM.dd}"     # index name
        }
    }
    if [type] == "system-syslog" {
        elasticsearch {
            hosts => ["10.170.13.1:9200"]
            index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["10.170.13.1:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx_access" {
        elasticsearch {
            hosts => ["10.170.13.1:9200"]
            index => "nginx_access-%{+YYYY.MM.dd}"
        }
    }
}
```
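The multiline codec above (pattern `^\[`, `negate => true`, `what => previous`) glues every line that does not start with `[` onto the preceding event, which is how a Java stack trace stays attached to its error line. A rough Python equivalent of that grouping, for intuition only:

```python
import re

def group_multiline(lines, pattern=r"^\["):
    """Mimic logstash's multiline codec with negate => true, what => previous:
    lines NOT matching the pattern are appended to the previous event."""
    events = []
    for line in lines:
        if re.match(pattern, line) or not events:
            events.append(line)            # a new event starts here
        else:
            events[-1] += "\n" + line      # continuation (e.g. a stack trace)
    return events

log = [
    "[2017-07-05] ERROR something broke",
    "java.lang.RuntimeException: boom",
    "    at com.example.Main.run(Main.java:42)",
    "[2017-07-05] INFO recovered",
]
print(len(group_multiline(log)))  # → 2 (the trace folds into the first event)
```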
Set up the system log:

```
~]# vim /etc/rsyslog.conf

*.* @@10.170.13.1:514    # append this last line: forward all logs to that IP:port (@@ means TCP)
```
Switch the nginx access log to JSON:

```
~]# vim /etc/nginx/nginx.conf

log_format logstash_json '{ "@timestamp": "$time_local", '
    '"@fields": { '
    '"remote_addr": "$remote_addr", '
    '"remote_user": "$remote_user", '
    '"body_bytes_sent": "$body_bytes_sent", '
    '"request_time": "$request_time", '
    '"status": "$status", '
    '"request": "$request", '
    '"request_method": "$request_method", '
    '"http_referrer": "$http_referer", '
    '"http_x_forwarded_for": "$http_x_forwarded_for", '
    '"http_user_agent": "$http_user_agent" } }';
access_log /var/log/nginx/access-json.log logstash_json;
```
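With that log_format in place, every line of access-json.log should parse as JSON. A quick sanity check in Python, using a made-up sample line in the shape that format produces (the values are invented for illustration):

```python
import json

# A sample line shaped like the log_format above (values are made up).
sample = ('{ "@timestamp": "05/Jul/2017:10:49:30 +0800", '
          '"@fields": { "remote_addr": "1.2.3.4", "status": "200", '
          '"request": "GET / HTTP/1.1", "request_time": "0.003" } }')

record = json.loads(sample)          # raises ValueError if the line is malformed
print(record["@fields"]["status"])   # prints 200 (nginx variables are strings)
```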
III. Installing Kibana
Package downloads: https://www.elastic.co/downloads/past-releases

```
~]# wget
~]# yum install kibana-5.4.3-x86_64.rpm -y
```
Kibana goes on this one box (10.170.13.1) for aggregation, analysis and display. That's just for this lab; in real production you can install Kibana on every ES node and put nginx in front for forwarding and authentication.
Edit the configuration file:

```
vim /etc/kibana/kibana.yml    # add the following settings

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: http://10.170.13.1:9200/
kibana.index: ".kibana"
```

Start Kibana:

```
~]# service kibana start
```
IV. Decoupling logstash with a redis message queue
In production there are plenty of cases where logstash cannot ship logs straight to ES — that is where a message queue comes in. This is the decoupling: log "collection" is separated from "processing and display".
Output to redis:

```
~]# vim /etc/logstash/conf.d/redis_in.conf

input { stdin {} }
output {
    redis {
        host => "127.0.0.1"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
    }
}
```

Start it (once it is up, type something into the standard input):

```
~]# logstash -f /etc/logstash/conf.d/redis_in.conf
```

Then open another terminal and inspect redis:

```
~]# redis-cli
127.0.0.1:6379> SELECT 6
OK
127.0.0.1:6379[6]> LLEN demo
(integer) 52
127.0.0.1:6379[6]> KEYS *
1) "demo"
```

Read from redis into ES:

```
~]# vim /etc/logstash/conf.d/redis_out.conf

input {
    redis {
        host => "127.0.0.1"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
    }
}
output {
    elasticsearch {
        hosts => ["10.170.13.1:9200"]
        index => "redis-demo-%{+YYYY.MM.dd}"
    }
}
```

Start it (once it is up, check that the index shows up in ES):

```
~]# logstash -f /etc/logstash/conf.d/redis_out.conf
```

And in another terminal the redis list drains:

```
~]# redis-cli
127.0.0.1:6379> SELECT 6
OK
127.0.0.1:6379[6]> LLEN demo
(integer) 0
```
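What the two configurations above set up is a producer and a consumer on one redis list: the shipper appends to the tail and the indexer pops from the head, so events come out in FIFO order and LLEN drains toward 0. The same list semantics can be sketched with a plain deque, no redis required (dates and index names follow the config above; this is an illustration, not the real transport):

```python
from collections import deque

# Stand-in for the redis list "demo": the shipper appends (like RPUSH),
# the indexer pops from the head (like LPOP), giving FIFO delivery.
queue = deque()

def ship(event):
    # logstash redis output side
    queue.append(event)

def index_one():
    # logstash redis input -> elasticsearch output side
    event = queue.popleft()
    return ("redis-demo-2017.07.05", event)   # daily index, as configured

ship("line one")
ship("line two")
print(len(queue))        # → 2  (LLEN demo while only the shipper runs)
print(index_one())       # → ('redis-demo-2017.07.05', 'line one')
print(len(queue))        # → 1  (drains toward 0 once the indexer runs)
```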
The rsyslog/system log, nginx access log and ES log from earlier can all be fed into redis by one logstash, and shipped from redis into ES by another.
Shipper side (logs → redis):

```
input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    syslog {
        type => "system-syslog"
        host => "10.170.13.1"
        port => "514"
    }
    file {
        path => "/var/log/elasticsearch/elasticsearch.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => "true"
            what => "previous"
        }
    }
    file {
        path => "/var/log/nginx/access-json.log"
        codec => "json"
        type => "nginx_access"
        start_position => "beginning"
    }
}
output {
    if [type] == "system" {
        redis {
            host => "10.170.13.1"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "system"
        }
    }
    if [type] == "system-syslog" {
        redis {
            host => "10.170.13.1"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "system-syslog"
        }
    }
    if [type] == "es-error" {
        redis {
            host => "10.170.13.1"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "es-error"
        }
    }
    if [type] == "nginx_access" {
        redis {
            host => "10.170.13.1"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "nginx_access"
        }
    }
}
```
Indexer side (redis → ES):

```
input {
    redis {
        type => "system"
        host => "10.170.13.1"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "system"
    }
    redis {
        type => "system-syslog"
        host => "10.170.13.1"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "system-syslog"
    }
    redis {
        type => "es-error"
        host => "10.170.13.1"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "es-error"
    }
    redis {
        type => "nginx_access"
        host => "10.170.13.1"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "nginx_access"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => ["10.170.13.1:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog" {
        elasticsearch {
            hosts => ["10.170.13.1:9200"]
            index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["10.170.13.1:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx_access" {
        elasticsearch {
            hosts => ["10.170.13.1:9200"]
            index => "nginx_access-%{+YYYY.MM.dd}"
        }
    }
}
```
V. Taking ELK to production

1. Classify your logs:

```
system logs    rsyslog    logstash syslog plugin
access logs    nginx      logstash codec json
error logs     file       logstash file + multiline
runtime logs   file       logstash codec json
device logs    syslog     logstash syslog plugin
debug logs     file       logstash json or multiline
```

2. Standardize your logs: fixed paths, JSON format wherever possible.

Log stacks: ELK (logstash), EFK (Flume), EHK (Heka). Message queues: redis, rabbitmq, kafka.
This post comes from the "志建" blog; please keep this attribution: http://aoof188.blog.51cto.com/7661673/1949397