Deploying ELK on a Single Server

Posted by 一念为云


System Components and Data Flow


The components of the system and the direction of data flow between them:


filebeat => kafka => logstash => elasticsearch => kibana



filebeat: collects the logs and ships the data to the next stage, kafka. In production, filebeat is deployed on each server whose logs are being collected.

kafka: receives and buffers the data shipped by the many filebeat instances in the previous stage, and serves as the data source for the next stage, logstash.

logstash: connects to kafka, reads the collected data from it, and outputs the data to the next stage, elasticsearch.

elasticsearch: receives the data sent by logstash and stores it.

kibana: presents the data. It reads from the previous stage, elasticsearch, and displays the data via a web interface.


Deploying Kafka

Download the binary package

Here we download the latest version, 3.0.

Link: https://downloads.apache.org/kafka/3.0.0/kafka_2.13-3.0.0.tgz

Configuration files

After unpacking, there are two configuration files of interest in the config directory: zookeeper's configuration file zookeeper.properties, and Kafka's configuration file server.properties.

zookeeper.properties:

You can change the value of dataDir so that zookeeper stores its data in a location of your choosing; everything else can stay unchanged.
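For reference, the zookeeper.properties shipped with Kafka 3.0 contains roughly the following defaults (a sketch; verify against the file in your own download):

# the directory where the zookeeper snapshot is stored
dataDir=/tmp/zookeeper
# the port at which clients connect
clientPort=2181
# disable the per-ip connection limit (non-production config)
maxClientCnxns=0
# disable the admin server to avoid port conflicts
admin.enableServer=false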


server.properties, with comment lines filtered out:

broker.id=0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0


Keep zookeeper.connect=localhost:2181 (zookeeper runs on the local node), and optionally point log.dirs somewhere other than /tmp; the remaining parameters are not examined further here.
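For example, to keep Kafka's data out of /tmp (which may be wiped on reboot), create a dedicated directory and point log.dirs at it. The path /data/kafka-logs below is only illustrative:

mkdir -p /data/kafka-logs
# then in config/server.properties:
# log.dirs=/data/kafka-logs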

Starting and testing Kafka

Start zookeeper

From the Kafka root directory, run:

bin/zookeeper-server-start.sh -daemon config/zookeeper.properties


Start Kafka

From the Kafka root directory, run:

bin/kafka-server-start.sh -daemon config/server.properties

Run ss -ant | grep -E "2181|9092" to make sure both services are listening.

Test Kafka

Create a topic named test:

bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic test --partitions 1 --replication-factor 1

If it returns "Created topic test.", the topic was created successfully.
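You can also double-check that the topic exists by listing the topics on the broker:

bin/kafka-topics.sh --bootstrap-server localhost:9092 --list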

Start a producer:

bin/kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test

Start a consumer:

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test

Type an arbitrary test string at the producer; if it arrives at the consumer, Kafka is up and running.
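A test session might look roughly like this (the > prompt is printed by the console producer):

# producer terminal
> hello kafka

# consumer terminal
hello kafka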


Deploying Filebeat

Configure the repository

Create a repo file under /etc/yum.repos.d/:


[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md


Next, filebeat can be installed with yum (note enabled=1 above; with enabled=0 you would have to pass --enablerepo=elasticsearch to each install):

yum install filebeat -y

Modify the configuration file

Edit filebeat's configuration file /etc/filebeat/filebeat.yml.

Find the filebeat.inputs section and make the following changes:

1. Change enabled: false to enabled: true.

2. Change /var/log/*.log to /root/test.log (for easier testing); the resulting inputs section is sketched after the Kafka output block below.

3. Comment out the settings under Elasticsearch Output and Logstash Output.

4. Add the following to create a Kafka output:


# ----------------------------- Kafka Output -----------------------------------
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["localhost:9092"]

  # message topic selection + partitioning
  topic: "test"
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
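After steps 1 and 2, the filebeat.inputs section should look roughly like this (a sketch based on the stock 7.x filebeat.yml):

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /root/test.log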


Test Filebeat

Start the filebeat service:

systemctl start filebeat.service

Create a test.log in the home directory (/root, matching the path configured above) and write a string into it to simulate log generation:

echo hello world! >> test.log

The data then shows up in the consumer terminal we opened earlier:


{"@timestamp":"2021-09-29T05:05:57.574Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.15.0"},"log":{"offset":13,"file":{"path":"/root/test.log"}},"message":"hello world!","input":{"type":"log"},"ecs":{"version":"1.11.0"},"host":{"containerized":false,"name":"server01","ip":["192.168.122.101","fe80::2c2b:dcc0:82de:41f4","172.18.0.1","172.19.0.1","172.17.0.1","10.1.0.1","172.20.1.1"],"mac":["52:54:00:99:b0:d1","02:42:82:93:df:2d","02:42:6e:82:1e:16","02:42:79:a0:77:33","02:42:b3:3d:52:7e","02:42:5f:8e:b0:4f"],"hostname":"server01","architecture":"x86_64","os":{"name":"CentOS Linux","kernel":"4.18.0-305.3.1.el8_4.x86_64","type":"linux","platform":"centos","version":"8","family":"redhat"},"id":"4c0ced58299848e2bc56f8bf0eb1209b"},"agent":{"ephemeral_id":"7fa60e6d-2d56-4b96-9192-948527a5024c","id":"d027bd76-6050-423a-93f6-049a15002a20","name":"server01","type":"filebeat","version":"7.15.0","hostname":"server01"}}

This means the data collected by filebeat has successfully made its way into Kafka.


Deploying Logstash


Install Logstash

Using the repository configured earlier, install directly with yum:

yum install logstash


Configure Logstash

Create a new file test.conf in the /etc/logstash/conf.d directory with the following content:


input {
  kafka {
    bootstrap_servers => ["localhost:9092"]
    group_id => "logstash"
    topics => ["test"]
    consumer_threads => 1
    decorate_events => "extended"
    add_field => {"from" => "test"}
    codec => "json"
  }
}

output {
  stdout {
  }
}
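Before running it, you can optionally have logstash validate the configuration file and exit, using its standard --config.test_and_exit flag:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf --config.test_and_exit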


Test Logstash

Run it in the foreground:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d

After a short wait, logstash prints a message indicating the pipeline has started successfully (for example, a line containing "Pipelines running").


Next, write data to test.log from a terminal:

echo this is a test! >> test.log

The logstash terminal then shows the following:


{
  "log" => {
    "offset" => 42,
    "file" => {
      "path" => "/root/test.log"
    }
  },
  "@version" => "1",
  "message" => "this is a test!",
  "from" => "test",
  "@timestamp" => 2021-09-29T05:28:37.752Z,
  "host" => {
    "hostname" => "server01",
    "name" => "server01",
    "architecture" => "x86_64",
    "mac" => [
      [0] "52:54:00:99:b0:d1",
      [1] "02:42:82:93:df:2d",
      [2] "02:42:6e:82:1e:16",
      [3] "02:42:79:a0:77:33",
      [4] "02:42:b3:3d:52:7e",
      [5] "02:42:5f:8e:b0:4f"
    ],
    "os" => {
      "platform" => "centos",
      "type" => "linux",
      "kernel" => "4.18.0-305.3.1.el8_4.x86_64",
      "version" => "8",
      "name" => "CentOS Linux",
      "family" => "redhat"
    },
    "id" => "4c0ced58299848e2bc56f8bf0eb1209b",
    "ip" => [
      [0] "192.168.122.101",
      [1] "fe80::2c2b:dcc0:82de:41f4",
      [2] "172.18.0.1",
      [3] "172.19.0.1",
      [4] "172.17.0.1",
      [5] "10.1.0.1",
      [6] "172.20.1.1"
    ],
    "containerized" => false
  },
  "agent" => {
    "type" => "filebeat",
    "version" => "7.15.0",
    "hostname" => "server01",
    "name" => "server01",
    "id" => "d027bd76-6050-423a-93f6-049a15002a20",
    "ephemeral_id" => "7fa60e6d-2d56-4b96-9192-948527a5024c"
  },
  "input" => {
    "type" => "log"
  },
  "ecs" => {
    "version" => "1.11.0"
  }
}


This shows that the data collected by filebeat has been delivered through Kafka to logstash.

The next step is to push the collected data on into elasticsearch.


Deploying Elasticsearch

Install Elasticsearch

Using the repository configured earlier, install elasticsearch with yum:

yum install elasticsearch -y


Modify the configuration file

Edit /etc/elasticsearch/elasticsearch.yml:


path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.cors.enabled: true
http.cors.allow-origin: "*"
cluster.initial_master_nodes: ["server01"]


network.host: 0.0.0.0

This allows access from outside the machine. It is not safe in production; it is used here only for testing convenience (accessing the VM from the host environment).


http.cors.enabled: true

http.cors.allow-origin: "*"

These two settings are needed so that elasticsearch-head can connect to elasticsearch later.


cluster.initial_master_nodes: ["server01"]

Set this to the name of the current host.


Start the elasticsearch service:

systemctl start elasticsearch.service
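Once the service is up (it can take a little while), a quick way to verify it is to query the root endpoint, which returns cluster information as JSON:

curl http://localhost:9200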


Install elasticsearch-head

To browse elasticsearch's data conveniently, we install this component.

There are many ways to install it; here we use docker:

docker pull alivv/elasticsearch-head

docker run -d --name es-head -p 9100:9100 alivv/elasticsearch-head


Use elasticsearch-head

Now open es-head from outside at http://192.168.122.101:9100; when it connects, elasticsearch-head has successfully reached elasticsearch.

Click "Indices", then "New Index", to create an index named test in elasticsearch.

Next, modify /etc/logstash/conf.d/test.conf to add an elasticsearch output to the output block:



output {
  stdout {
  }
  elasticsearch {
    hosts => "localhost:9200"
    index => "test"
  }
}


Rerun Logstash

Stop the foreground process and start it again:

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d

In a terminal, run:

echo this is a test! >> test.log

The collected data now appears in both the consumer terminal and the logstash terminal.

We continue checking with elasticsearch-head:

Click the "Browser" tab, then the "test" index below it.


Click the topmost document; a window pops up showing the document and its metadata in JSON format.



At this point, the collected data has traveled through the whole chain and finally reached elasticsearch.
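If you prefer the command line to es-head, the same data can be inspected with a standard search request against the test index:

curl "http://localhost:9200/test/_search?pretty"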


Deploying Kibana

Install Kibana

Again using the repository configured earlier, install with yum:

yum install kibana -y


Configure Kibana

Modify the configuration file /etc/kibana/kibana.yml:

server.port: 5601

server.host: "0.0.0.0"


Start Kibana

systemctl start kibana.service

Startup is fairly slow; give it a moment.
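To check whether it has finished starting, you can poll Kibana's standard status endpoint until it responds:

curl http://localhost:5601/api/status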


Access Kibana

In a browser, visit the server's IP on port 5601:

http://192.168.122.101:5601

Navigate to Observability => Logs => Stream.



At this point, data has traveled end to end through every stage of the system, and the ELK platform deployment is complete.