Setting Up a Graylog2 Cluster (an Elasticsearch-Based Log Collection and Analysis Platform)

Posted by 酒局下饭


A Graylog2 cluster consists of three components: Graylog-server (log collection, processing, and analysis), MongoDB (shared configuration for the Graylog cluster), and Elasticsearch (log storage).

Graylog 2.4.x supports only Elasticsearch 5; Elasticsearch 6 support starts with Graylog 2.5.2. The latest release at the time of writing, Graylog 3.2.4, does not yet support Elasticsearch 7.

The cluster built here uses Graylog 2.5.2 + MongoDB 3.6 + Elasticsearch 6, running on Ubuntu 16.04.



Environment: Ubuntu 16.04 x64

Host list

10.0.0.11 graylog01.server.local
10.0.0.12 graylog02.server.local
10.0.0.13 graylog03.server.local
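
The scripts below address the nodes by these hostnames, so they must resolve on every machine. If they are not in DNS, one option is to append them to /etc/hosts on each host:

echo '10.0.0.11 graylog01.server.local
10.0.0.12 graylog02.server.local
10.0.0.13 graylog03.server.local' | sudo tee -a /etc/hosts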

Application layout

10.0.0.11 Graylog-server(master) 2.5.2、MongoDB 3.6、ElasticSearch 6.8
10.0.0.12 Graylog-server 2.5.2、MongoDB 3.6、ElasticSearch 6.8
10.0.0.13 Graylog-server 2.5.2、MongoDB 3.6、ElasticSearch 6.8

1. Install the MongoDB cluster (replica set)

(1) Run the following script on every host to install and start MongoDB:

#!/bin/bash

sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2930ADAE8CAF5059EE73BB4B58712A2291FA4AD5
echo "deb [ arch=amd64,arm64 ] https://repo.mongodb.org/apt/ubuntu xenial/mongodb-org/3.6 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list
sudo apt-get update
sudo apt-get install -y mongodb-org
sudo systemctl daemon-reload
sudo systemctl enable mongod.service

# Disable transparent huge pages now (a plain redirection would not survive sudo)
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/enabled
echo never | sudo tee /sys/kernel/mm/transparent_hugepage/defrag

# Persist the THP settings across reboots via rc.local (runs as root at boot)
sudo sed -i '/exit 0/d' /etc/rc.local
echo 'echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
exit 0' | sudo tee -a /etc/rc.local

# Listen on all interfaces so the other replica-set members can connect
test -f /etc/mongod.conf &&\
sudo cp /etc/mongod.conf /etc/mongod.conf.$$ &&\
sudo sed -r -i 's/bindIp:.*$/bindIp: 0.0.0.0/g' /etc/mongod.conf

# Add the replica-set name (note the two-space YAML indent under replication:)
grep 'mongo_cluster' /etc/mongod.conf >/dev/null 2>&1 ||\
echo '#mongo_cluster
replication:
  replSetName: rs0' | sudo tee -a /etc/mongod.conf

sudo systemctl start mongod.service
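
A quick sanity check after the script runs, to confirm mongod is up and the THP setting took effect:

sudo systemctl status mongod --no-pager
cat /sys/kernel/mm/transparent_hugepage/enabled   # expect: always madvise [never]
mongo --eval 'db.runCommand({ ping: 1 })'         # expect: { "ok" : 1 }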

(2) Log in to 10.0.0.11 and run the following commands to initialize the replica set and add the other two members, forming a three-node cluster (one primary, two secondaries):

mongo --port 27017
# Initialize the replica set on this node (it becomes the primary):
rs.initiate()
# Add the other members:
rs.add("graylog02.server.local:27017")
rs.add("graylog03.server.local:27017")
# Review the resulting configuration:
rs.config()
# Check replica-set status:
rs.status()
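
rs.status() is verbose; to see just each member's name and state, a one-liner in the same mongo shell helps:

rs.status().members.forEach(function(m) { print(m.name + " : " + m.stateStr) })
// expect one PRIMARY and two SECONDARY entries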

(3) Create the graylog database and user.

Note: this must be done on the primary node, at the rs0:PRIMARY> prompt:

use graylog
db.createUser(
  {
    user: "graylog",
    pwd: "password",
    roles: [ { role: "readWrite", db: "graylog" } ]
  }
)

db.grantRolesToUser("graylog", [ { role: "dbAdmin", db: "graylog" } ])
show users
db.auth("graylog", "password")
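
To confirm the new user works across the replica set, you can authenticate from one of the other hosts (hostnames and credentials as created above):

mongo --host graylog01.server.local --port 27017 -u graylog -p password --authenticationDatabase graylog graylog --eval 'db.runCommand({ ping: 1 })'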

2. Install the Elasticsearch cluster:

(1) Install the JDK on every host, then run this script:

#!/bin/bash

# Assumes a JDK unpacked under /data; adjust to your actual install path
export JAVA_HOME=/data/jdk1.8.0_202

# Import Elastic's signing key so apt can verify the repository
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-6.x.list
sudo apt-get update && sudo apt-get install elasticsearch -y
sudo /bin/systemctl daemon-reload
sudo /bin/systemctl enable elasticsearch.service

test -f /etc/elasticsearch/log4j2.properties &&\
sudo sed -r -i 's/^logger.action.level.*$/logger.action.level = info/g' /etc/elasticsearch/log4j2.properties

es_config='/etc/default/elasticsearch'
test -f ${es_config} &&\
sudo sed -r -i '/^DATA_DIR=/d' ${es_config} &&\
echo 'DATA_DIR=/data/elasticsearch/' | sudo tee -a ${es_config}

# vm.max_map_count=262144 is the minimum Elasticsearch requires
grep 'vm.max_map_count' /etc/sysctl.conf >/dev/null 2>&1 ||\
echo 'vm.max_map_count=262144
fs.file-max=655360
vm.swappiness = 0' | sudo tee -a /etc/sysctl.conf

sudo /sbin/sysctl -p

es_config='/etc/elasticsearch/elasticsearch.yml'
test -f ${es_config} && sudo cp ${es_config} ${es_config}.$$
hostname=`hostname`
echo 'cluster.name: graylog
node.name: HOSTNAME
network.host: 0.0.0.0
# Hosts used for unicast discovery
discovery.zen.ping.unicast.hosts: ["graylog01.server.local:9300","graylog02.server.local:9300","graylog03.server.local:9300"]
# Minimum master-eligible nodes: (master-eligible nodes / 2) + 1
discovery.zen.minimum_master_nodes: 2
node.master: true
node.data: true
bootstrap.system_call_filter: false
http.cors.enabled: true
http.cors.allow-origin: "*"
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
gateway.recover_after_nodes: 3
gateway.expected_nodes: 3
gateway.recover_after_time: 5m' | sudo tee ${es_config} >/dev/null
# Substitute this host's real name for the HOSTNAME placeholder
test -f ${es_config} && sudo sed -r -i "s/HOSTNAME/${hostname}/g" ${es_config}

sudo mkdir -p /data/elasticsearch/ && sudo chown -R elasticsearch:elasticsearch /data/elasticsearch/
es_default_config='/etc/default/elasticsearch'
sudo chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/bin/
sudo chown elasticsearch:elasticsearch ${es_default_config}
grep -E '^ES_PATH_CONF' ${es_default_config} >/dev/null 2>&1 ||\
echo 'ES_PATH_CONF=/etc/elasticsearch' | sudo tee -a ${es_default_config}

mydate=`date -d now +"%F_%H-%M-%S"`
es_config='/etc/elasticsearch/elasticsearch.yml'
test -f ${es_config} && sudo cp ${es_config} ${es_config}.${mydate}

# Fallback appends: these are no-ops when the settings were already written above
grep -E '^path.data' ${es_config} >/dev/null 2>&1 ||\
echo 'path.data: /data/elasticsearch' | sudo tee -a ${es_config}
grep -E '^path.logs' ${es_config} >/dev/null 2>&1 ||\
echo 'path.logs: /var/log/elasticsearch' | sudo tee -a ${es_config}

grep -E '^gateway.recover_after_nodes' ${es_config} >/dev/null 2>&1 ||\
echo 'gateway.recover_after_nodes: 3' | sudo tee -a ${es_config}
grep -E '^gateway.expected_nodes' ${es_config} >/dev/null 2>&1 ||\
echo 'gateway.expected_nodes: 3' | sudo tee -a ${es_config}
grep -E '^gateway.recover_after_time' ${es_config} >/dev/null 2>&1 ||\
echo 'gateway.recover_after_time: 5m' | sudo tee -a ${es_config}
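
Before starting the service, a quick check that the rendered file picked up this host's name and the cluster settings:

sudo grep -E '^(cluster\.name|node\.name|discovery)' /etc/elasticsearch/elasticsearch.yml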

(2) Adjust the JVM heap size:

sudo vi /etc/elasticsearch/jvm.options
-Xms4g
-Xmx4g

# Set the heap to half of the machine's RAM, keeping -Xms and -Xmx equal
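
If you prefer to script this, a minimal sketch (assuming the package's stock -Xms1g/-Xmx1g lines) that sets the heap to half of physical RAM:

# half of MemTotal (kB) converted to GB; on small VMs check the result is at least 1
half_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 / 2 ))
sudo sed -r -i "s/^-Xms[0-9]+[gm]/-Xms${half_gb}g/; s/^-Xmx[0-9]+[gm]/-Xmx${half_gb}g/" /etc/elasticsearch/jvm.options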

# Once the service is running (started in step 3 below), check cluster health:
curl 'http://127.0.0.1:9200/_cluster/health?pretty=true'
# List cluster nodes:
curl 'http://127.0.0.1:9200/_cat/nodes?v'
# Check cluster-wide stats:
curl -s -XGET 'http://localhost:9200/_cluster/stats?pretty'
# Check shard allocation:
curl -XGET 'http://localhost:9200/_cat/shards'

(3) Start the elasticsearch service on all hosts:

sudo service elasticsearch start

3. Install the Graylog cluster:

(1) Run the following installation script on every host:

#!/bin/bash

tmp_deb='/tmp/graylog-2.5-repository_latest.deb'
trap "exit 1" HUP INT PIPE QUIT TERM
trap "test -f ${tmp_deb} && rm -f ${tmp_deb}" EXIT

wget https://packages.graylog2.org/repo/packages/graylog-2.5-repository_latest.deb -O ${tmp_deb}
sudo dpkg -i ${tmp_deb}
sudo apt-get update && sudo apt-get install -y graylog-server
sudo systemctl daemon-reload
sudo systemctl enable graylog-server.service

# Pick the first non-loopback IPv4 address for rest_transport_uri
ipaddr=`/sbin/ip addr list|grep -oP '\d{1,3}(\.\d{1,3}){3}'|grep -Ev '^127|255$'|head -n1`
test -f /etc/graylog/server/server.conf &&\
sudo cp /etc/graylog/server/server.conf /etc/graylog/server/server.conf.$$ &&\
echo "root_timezone = Asia/Shanghai
is_master = false
node_id_file = /etc/graylog/server/node-id
password_secret =
root_password_sha2 =
plugin_dir = /usr/share/graylog-server/plugin
web_listen_uri = http://0.0.0.0:9000/
rest_listen_uri = http://0.0.0.0:9000/api/
rest_transport_uri = http://${ipaddr}:9000/api/
web_endpoint_uri = http://graylog.server.local:9000/api/
web_enable = true
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 4
elasticsearch_replicas = 1
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 20
outputbuffer_processors = 40
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://graylog:password@graylog01.server.local:27017,graylog02.server.local:27017,graylog03.server.local:27017/graylog?replicaSet=rs0
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32
elasticsearch_hosts = http://graylog01.server.local:9200,http://graylog02.server.local:9200,http://graylog03.server.local:9200
elasticsearch_discovery_enabled = false" | sudo tee /etc/graylog/server/server.conf >/dev/null

(2) Generate the web login credentials:

sudo apt-get install -y apt-transport-https uuid-runtime pwgen

To generate password_secret, run:

pwgen -N 1 -s 96

To generate root_password_sha2, run:

echo -n yourpassword | sha256sum

(3) Put the password_secret and root_password_sha2 values generated above into the Graylog config file:

vim /etc/graylog/server/server.conf
password_secret = (output of pwgen -N 1 -s 96)
root_password_sha2 = (output of echo -n yourpassword | sha256sum)
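
To script this instead of editing by hand, something like the following works (yourpassword is a placeholder; note that password_secret must be identical on every node of the cluster):

SECRET=$(pwgen -N 1 -s 96)
HASH=$(echo -n yourpassword | sha256sum | awk '{print $1}')
sudo sed -i "s|^password_secret =.*|password_secret = ${SECRET}|; s|^root_password_sha2 =.*|root_password_sha2 = ${HASH}|" /etc/graylog/server/server.conf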

(4) Designate one node as the Graylog cluster master. Log in to 10.0.0.11 and edit its config file:

vim /etc/graylog/server/server.conf
is_master = true

(5) Start the graylog service on each host:

sudo systemctl start graylog-server.service
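
Startup can take a while on first boot; progress is visible in the server log (default path for the deb package):

sudo tail -f /var/log/graylog-server/server.log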

(6) Check whether the service is listening on port 9000:

[root@graylog ~]# netstat -nlptu|grep 9000
tcp6 0 0 :::9000 :::* LISTEN 20970/java

(7) Log in to the web interface at:

http://10.0.0.11:9000/


4. Accessing Graylog through haproxy or nginx

(1) haproxy 1.6 configuration:

frontend graylog_http
    bind *:80
    option forwardfor
    http-request add-header X-Forwarded-Host %[req.hdr(host)]
    http-request add-header X-Forwarded-Server %[req.hdr(host)]
    http-request add-header X-Forwarded-Port %[dst_port]
    acl is_graylog hdr_dom(host) -i -m str graylog.server.local
    use_backend graylog if is_graylog

backend graylog
    description The Graylog Web backend.
    balance roundrobin
    option httpchk HEAD /api/system/lbstatus
    http-request set-header X-Graylog-Server-URL http://graylog.server.local/
    server graylog1 10.0.0.11:9000 maxconn 20 check
    server graylog2 10.0.0.12:9000 maxconn 20 check
    server graylog3 10.0.0.13:9000 maxconn 20 check
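
The httpchk line above probes Graylog's load-balancer status endpoint; it can also be hit directly to confirm a node is ready for traffic:

curl -i http://10.0.0.11:9000/api/system/lbstatus
# a healthy node returns 200 OK with the body ALIVE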

(2) nginx configuration:

server {
    listen 80 default_server;
    listen [::]:80 default_server ipv6only=on;
    server_name graylog.server.local;

    location / {
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Graylog-Server-URL http://$server_name/;
        proxy_pass http://10.0.0.11:9000;
    }
}
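
After placing this server block in the nginx configuration (for example under /etc/nginx/conf.d/), validate and reload:

sudo nginx -t && sudo systemctl reload nginx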

Reference: docs.graylog.org/en/2.5


