Setting up an ELFK 6.8.0 cluster on Linux

Posted by 我是一只小小茑




1. Environment and pre-installation preparation


1) Component overview



1. Filebeat is a log shipper: installed on each server, it watches log directories or specific log files and tails them continuously, tracking every change as it is written.

2. Kafka is a high-throughput distributed publish-subscribe messaging system that can handle all the activity-stream data of a consumer-scale website (not used in this setup for now).

3. Logstash is a pipeline with real-time data-transport capability, moving data from its input end to its output end; along the way you can insert filters to suit your needs, and Logstash ships many powerful filter plugins covering a wide range of scenarios.

4. Elasticsearch provides a distributed, multi-tenant full-text search engine over a RESTful web interface.

5. Kibana is the user interface for Elasticsearch.




In a real deployment that needs real-time search over large volumes of data, Filebeat monitors the log files and ships them to Kafka; Kafka receives Filebeat's output in real time and feeds it to Logstash. The data arriving at Logstash may not yet be in the format (or restricted to the business fields) we want, so Logstash filter plugins are used to shape it into the desired form; the result is then sent on to Elasticsearch, where it can be searched with the full power of distributed retrieval.






2) Environment preparation



Host      IP              Version    Role
master    192.168.2.222              Elasticsearch1, Logstash, Kibana
node1     192.168.2.223              Elasticsearch2
node2     192.168.2.224              Elasticsearch3
web1      192.168.2.225              Filebeat
web2
web3


elasticsearch  https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.8.0.rpm

kibana         https://artifacts.elastic.co/downloads/kibana/kibana-6.8.0-x86_64.rpm

logstash       https://artifacts.elastic.co/downloads/logstash/logstash-6.8.0.rpm

filebeat       https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.8.0-x86_64.rpm


3) Install the JDK environment

[root@master ~]# yum -y install java-1.8.0-openjdk

[root@master ~]# java -version



openjdk version "1.8.0_312"

OpenJDK Runtime Environment (build 1.8.0_312-b07)

OpenJDK 64-Bit Server VM (build 25.312-b07, mixed mode)





2. Installing and configuring the Elasticsearch cluster

2.1 Install Elasticsearch

Note: install the package on all three machines.

[root@master ~]# rpm -ivh elasticsearch-6.8.0.rpm

[root@node1  ~]# rpm -ivh elasticsearch-6.8.0.rpm

[root@node2  ~]# rpm -ivh elasticsearch-6.8.0.rpm


Configure Elasticsearch

# Set the JVM heap to half of the machine's memory, leaving the other half for the OS; the default is 1g

[root@master ~]# vim /etc/elasticsearch/jvm.options

-Xms2g 

-Xmx2g


# Configure the Elasticsearch settings; on the other nodes, change the values of node.name and network.host accordingly

[root@master ~]# cp /etc/elasticsearch/elasticsearch.yml /etc/elasticsearch/elasticsearch.yml_back

[root@master ~]# cd /etc/elasticsearch/

[root@master elasticsearch]# vim elasticsearch.yml


[root@master elasticsearch]# grep "^[a-z]" elasticsearch.yml



cluster.name: cict-es

node.name: master

path.data: /data/els

path.logs: /data/log/

network.host: 192.168.2.222

http.port: 9200

discovery.zen.ping.unicast.hosts: ["192.168.2.222", "192.168.2.223","192.168.2.224"]

discovery.zen.minimum_master_nodes: 2
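Since only node.name and network.host differ on node1 and node2, the per-node files can be generated from the master copy with sed. A minimal sketch, run here on a throwaway temp copy (on the real nodes the target would be /etc/elasticsearch/elasticsearch.yml):

```shell
# Generate node1's elasticsearch.yml from the master config by
# rewriting only node.name and network.host (illustrative temp copy).
src=$(mktemp)
cat > "$src" <<'EOF'
cluster.name: cict-es
node.name: master
network.host: 192.168.2.222
EOF
sed -e 's/^node.name: .*/node.name: node1/' \
    -e 's/^network.host: .*/network.host: 192.168.2.223/' \
    "$src" > node1-elasticsearch.yml
grep '^node.name' node1-elasticsearch.yml   # node.name: node1
```

The same two substitutions with node2 / 192.168.2.224 produce the third node's file; cluster.name and the discovery settings stay identical on all nodes.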



[root@es-node1 config]# vim /etc/security/limits.conf



* soft nofile 65536

* hard nofile 131072

* soft nproc 2048

* hard nproc 4096

* soft memlock unlimited

* hard memlock unlimited
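limits.conf is applied at login, not to the shell that edited it; after opening a new login session, the limits can be verified with ulimit (the values in the comments are what the settings above should produce):

```shell
# Verify per-process limits in a fresh login session.
ulimit -n   # open files (soft limit)  -> 65536 with the settings above
ulimit -u   # max user processes       -> 2048 with the settings above
```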




[root@es-node1 config]# vim /etc/sysctl.conf


net.ipv4.tcp_max_tw_buckets = 6000

net.ipv4.ip_local_port_range = 1024 65000

net.ipv4.tcp_tw_reuse = 1

net.ipv4.tcp_tw_recycle = 1   # note: removed in Linux 4.12+ and unsafe behind NAT; drop this line on newer kernels

net.ipv4.tcp_fin_timeout = 10

net.ipv4.tcp_syncookies = 1

net.core.netdev_max_backlog = 262144

net.ipv4.tcp_max_orphans = 262144

net.ipv4.tcp_max_syn_backlog = 262144

net.ipv4.tcp_timestamps = 0

net.ipv4.tcp_synack_retries = 1

net.ipv4.tcp_syn_retries = 1

net.ipv4.tcp_keepalive_time = 30

net.ipv4.tcp_mem= 786432 2097152 3145728

net.ipv4.tcp_rmem= 4096 4096 16777216

net.ipv4.tcp_wmem= 4096 4096 16777216
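The reboot below works, but the kernel settings can also be loaded immediately and read back from /proc without restarting:

```shell
# Apply /etc/sysctl.conf without rebooting (needs root), then read a
# value back from /proc to confirm it took effect.
if [ "$(id -u)" -eq 0 ]; then
    sysctl -p   # reloads /etc/sysctl.conf into the running kernel
fi
cat /proc/sys/net/ipv4/tcp_fin_timeout   # 10 once the settings are loaded
```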



[root@master elasticsearch]# reboot

Create the data directories and set ownership

[root@master ~]# mkdir -pv /data/{els,log}

mkdir: created directory "/data"

mkdir: created directory "/data/els"

mkdir: created directory "/data/log"

[root@master ~]# chown -R elasticsearch:elasticsearch /data/

Start Elasticsearch on all three machines:

[root@master ~]# systemctl start  elasticsearch

[elastic@es-node1 elasticsearch]$ netstat -tnlp

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name

tcp        0      0 172.9.201.76:9200      0.0.0.0:*               LISTEN      2072/java    # 9200: RESTful HTTP API

tcp        0      0 172.9.201.76:9300      0.0.0.0:*               LISTEN      2072/java    # 9300: TCP transport port, used for inter-node traffic and the TransportClient

tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -

tcp6       0      0 :::22                   :::*                    LISTEN      -



Open http://192.168.2.222:9200 in a browser:



{
  "name": "node-76",
  "cluster_name": "my-es",
  "cluster_uuid": "FhxctUHqTz6eJZCkDuXwPQ",
  "version": {
    "number": "6.8.0",
    "build_flavor": "default",
    "build_type": "rpm",
    "build_hash": "65b6179",
    "build_date": "2019-05-15T20:06:13.172855Z",
    "build_snapshot": false,
    "lucene_version": "7.7.0",
    "minimum_wire_compatibility_version": "5.6.0",
    "minimum_index_compatibility_version": "5.0.0"
  },
  "tagline": "You Know, for Search"
}



Check that the nodes have formed a cluster:

curl -XGET http://192.168.2.222:9200/_cat/nodes   # can be run on any ES node; the IP can also be .223 or .224
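The screenshot of the output is not reproduced here. _cat/nodes prints one line per node, and the `*` in the master column marks the elected master; a quick way to pick it out (the sample output below is illustrative, not captured from this cluster):

```shell
# Extract the elected master's name from sample _cat/nodes output.
# 6.x columns: ip heap ram cpu load1 load5 load15 node.role master name
awk '$9 == "*" { print $10 }' <<'EOF'
192.168.2.222 23 94 1 0.01 0.05 0.06 mdi * master
192.168.2.223 18 92 0 0.00 0.02 0.05 mdi - node1
192.168.2.224 20 93 0 0.03 0.04 0.05 mdi - node2
EOF
# prints: master
```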


If 192.168.2.222 is stopped, the cluster elects a new master and node1 takes over the role.

Troubleshooting a startup failure: check the system log with cat /var/log/messages


Dec 29 10:47:05 node2 systemd: Started Elasticsearch.

Dec 29 10:47:06 node2 elasticsearch: Exception in thread "main" 2021-12-29 10:47:06,091 main ERROR No Log4j 2 configuration file found. Using default configuration (logging only errors to the console), or user programmatically provided configurations. Set system property log4j2.debug to show Log4j 2 internal initialization logging. See https://logging.apache.org/log4j/2.x/manual/configuration.html for instructions on how to configure Log4j 2

Dec 29 10:47:06 node2 elasticsearch: SettingsException[Failed to load settings from /etc/elasticsearch/elasticsearch.yml]; nested: AccessDeniedException[/etc/elasticsearch/elasticsearch.yml];

Dec 29 10:47:06 node2 elasticsearch: at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:102)

Dec 29 10:47:06 node2 elasticsearch: at org.elasticsearch.cli.EnvironmentAwareCommand.createEnv(EnvironmentAwareCommand.java:95)

Dec 29 10:47:06 node2 elasticsearch: at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)

Dec 29 10:47:06 node2 elasticsearch: at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)

Dec 29 10:47:06 node2 elasticsearch: at org.elasticsearch.cli.Command.main(Command.java:90)

Dec 29 10:47:06 node2 elasticsearch: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:116)

Dec 29 10:47:06 node2 elasticsearch: at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:93)

Dec 29 10:47:06 node2 elasticsearch: Caused by: java.nio.file.AccessDeniedException: /etc/elasticsearch/elasticsearch.yml

Dec 29 10:47:06 node2 elasticsearch: at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)

Dec 29 10:47:06 node2 elasticsearch: at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)

Dec 29 10:47:06 node2 elasticsearch: at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)

Dec 29 10:47:06 node2 elasticsearch: at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)

Dec 29 10:47:06 node2 elasticsearch: at java.nio.file.Files.newByteChannel(Files.java:361)

Dec 29 10:47:06 node2 elasticsearch: at java.nio.file.Files.newByteChannel(Files.java:407)

Dec 29 10:47:06 node2 elasticsearch: at java.nio.file.spi.FileSystemProvider.newInputStream(FileSystemProvider.java:384)

Dec 29 10:47:06 node2 elasticsearch: at java.nio.file.Files.newInputStream(Files.java:152)

Dec 29 10:47:06 node2 elasticsearch: at org.elasticsearch.common.settings.Settings$Builder.loadFromPath(Settings.java:1112)

Dec 29 10:47:06 node2 elasticsearch: at org.elasticsearch.node.InternalSettingsPreparer.prepareEnvironment(InternalSettingsPreparer.java:100)

Dec 29 10:47:06 node2 elasticsearch: ... 6 more

Dec 29 10:47:06 node2 systemd: elasticsearch.service: main process exited, code=exited, status=1/FAILURE

Dec 29 10:47:06 node2 systemd: Unit elasticsearch.service entered failed state.

Dec 29 10:47:06 node2 systemd: elasticsearch.service failed.


The AccessDeniedException shows that the elasticsearch user cannot read /etc/elasticsearch/elasticsearch.yml. Restoring the RPM's default ownership and mode (chgrp elasticsearch /etc/elasticsearch/elasticsearch.yml; chmod 660 /etc/elasticsearch/elasticsearch.yml) fixes it; reinstalling the package on the node also restores those defaults.


3. Install and configure Kibana

[root@shtw-kibana01 ~]# rpm -ivh kibana-6.8.0-x86_64.rpm


Configure Kibana

Note: although the ES cluster has three nodes, this setup points elasticsearch.hosts in kibana.yml at a single ES host, so we normally use the master node here.

[root@shtw-kibana01 ~]# cd /etc/kibana

[root@shtw-kibana01 ~]# cp kibana.yml kibana.yml-bak

[root@shtw-kibana01 ~]# vim kibana.yml


server.port: 5601                                  

server.host: "192.168.2.222"             

elasticsearch.hosts: ["http://192.168.2.222:9200"]

i18n.locale: "zh-CN"


[root@shtw-kibana01  ~]# systemctl start kibana


4. Localizing Kibana to Chinese

The 7.x releases ship the Chinese translation files (under node_modules/x-pack/plugins/translations/translations/ in the Kibana directory), and 6.8.0 already includes this directory as well, so all that remains is:

[root@shtw-kibana01 translations]# cd /usr/share/kibana/node_modules/x-pack/plugins/translations

[root@shtw-kibana01 translations]# cp -r translations /usr/share/kibana/src/legacy/core_plugins/kibana

[root@shtw-kibana01 translations]# vim /etc/kibana/kibana.yml  # edit the config file

i18n.locale: "zh-CN"  # the default is en


5. Install and configure Logstash

[root@shtw-logstash01 ~]# rpm -ivh logstash-6.8.0.rpm

Configure Logstash

[root@shtw-logstash01 logstash]#  cd /etc/logstash

[root@shtw-logstash01 logstash]#  vim logstash.yml



path.data: /data/logstash     # data path

http.host: "192.168.2.222"    # bind address for the HTTP API

path.logs: /var/log/logstash  # log path




Create the pipeline config (this pipeline collects logs from the local machine):

[root@master ~]# cat /etc/logstash/conf.d/server.conf



input {
    file {
        path => "/var/log/httpd/*log"   # the logstash user needs read access, or nothing is collected
        type => "httpd"
        start_position => "beginning"
    }
}

output {
    elasticsearch {
        hosts => ["192.168.2.222:9200"]
        index => "httpd-%{+YYYY.MM.dd}"
    }
}
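The %{+YYYY.MM.dd} sprintf reference in the index name expands from each event's @timestamp, so every day gets its own index. For an event stamped today, the resulting name has the shape of:

```shell
# Illustration only: Logstash derives the date from the event's
# @timestamp field; this just shows the shape of the daily index name.
printf 'httpd-%s\n' "$(date +%Y.%m.%d)"   # e.g. httpd-2021.12.29
```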



[root@master ~]# mkdir /data/logstash

[root@master ~]# chown -R logstash:logstash /data/logstash/

[root@master ~]# systemctl start logstash

or run it in the foreground:

[root@master ~]# logstash -f /etc/logstash/conf.d/server.conf

Check the log indices:

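The screenshots of the index list are not reproduced here; the same information comes from the _cat/indices API, e.g. `curl 'http://192.168.2.222:9200/_cat/indices?v'`. Filtering sample output (illustrative values, not captured from this cluster) for the daily httpd index:

```shell
# Grep sample _cat/indices output for the daily httpd-YYYY.MM.dd index.
grep -E 'httpd-[0-9]{4}\.[0-9]{2}\.[0-9]{2}' <<'EOF'
health status index              uuid                   pri rep docs.count
green  open   .kibana_1          aBcDeFgHiJkLmNoPqRsTuV   1   1         42
green  open   httpd-2021.12.29   AbCdEfGhIjKlMnOpQrStUv   5   1       1024
EOF
```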

6. Install Filebeat (responsible for collecting the logs)

[root@web1 ~]# rpm -ivh filebeat-6.8.0-x86_64.rpm


Configure Filebeat

[root@web1 ~]# cd /etc/filebeat

[root@web1 filebeat]# cp filebeat.yml  filebeat.yml-bak

[root@web1 filebeat]# vim filebeat.yml



#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  enabled: true

  paths:

    - /var/log/httpd/*log

#----------------------------- Logstash output --------------------------------

output.logstash:

  # The Logstash hosts

  hosts: ["192.168.2.222:5044"]  # the Logstash server's IP and port
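Note that the Logstash pipeline in the next step builds the index name from [fields][service_name], which only exists on the event if Filebeat attaches it. That would be an addition to the input section above, along these lines (the field name and value here are this example's assumption and must match whatever the pipeline expects):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/httpd/*log
  fields:
    service_name: httpd   # surfaced to Logstash as [fields][service_name]
```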






[root@web1 filebeat]# systemctl start filebeat

On the Logstash node, create a new Logstash config that receives events from Filebeat:

[root@master conf.d]# cat /etc/logstash/conf.d/logstash.conf



input {
    beats {
        port => "5044"
    }
}

output {
    elasticsearch {
        hosts => ["192.168.2.222:9200"]
        index => "%{[fields][service_name]}-%{+YYYY.MM.dd}"
    }
    stdout {
        codec => rubydebug
    }
}




# Restart Logstash

[root@master conf.d]# systemctl restart  logstash

[root@master conf.d]# netstat -ntpl





