Log Collection System Architecture Design (Flume + ZooKeeper + Kafka + PHP + MySQL)
I. Install JDK
II. Install Flume
III. Install Kafka
1. ZooKeeper
2. Kafka
IV. Startup and Test Steps
V. Directory Layout
VI. Log Collection System Design Diagram
VII. Recommended Big Data References
I. Install JDK (version 1.8.0_191)
1. Download:
https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Then extract: tar -zxvf /jdk-8u191-linux-x64.tar.gz -C /home/ppgt/local/
2. Edit /etc/profile and add:
export JAVA_HOME=/home/ppgt/local/jdk1.8.0_191
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
3. Verify (run source /etc/profile first, or log in again, so the updated PATH takes effect):
java -version
II. Install Flume (version 1.8.0)
1. Download:
wget http://mirrors.hust.edu.cn/apache/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz
2. Extract and install:
tar -zxvf apache-flume-1.8.0-bin.tar.gz -C /home/ppgt/local/
3. Verify the installation:
bin/flume-ng version
4. Edit the configuration under conf/:
cp flume-env.sh.template flume-env.sh
vi flume-env.sh
// Set the Java path to the JDK installed in step I, e.g.: export JAVA_HOME=/home/ppgt/local/jdk1.8.0_191
5. Add a configuration file for the Kafka connection: conf/flumetokafka.conf (see the note on sink property names after the listing)
# Flume-to-Kafka agent configuration
# Name the components of this agent
flume_kafka.sources = exec-sources
flume_kafka.sinks = kafka-sink
flume_kafka.channels = memory-channel
# Describe/configure the source
flume_kafka.sources.exec-sources.type = exec
flume_kafka.sources.exec-sources.command = tail -F /home/ppgt/tmpfile/testlogs/data.log
# Describe the sink
flume_kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
flume_kafka.sinks.kafka-sink.topic = topiclogs01
flume_kafka.sinks.kafka-sink.brokerList = localhost:9092
flume_kafka.sinks.kafka-sink.requiredAcks = 1
flume_kafka.sinks.kafka-sink.batchSize = 20
# Use a channel which buffers events in memory
flume_kafka.channels.memory-channel.type = memory
flume_kafka.channels.memory-channel.capacity = 1000
flume_kafka.channels.memory-channel.transactionCapacity = 100
# Bind the source and sink to the channel
flume_kafka.sources.exec-sources.channels = memory-channel
flume_kafka.sinks.kafka-sink.channel = memory-channel
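Note: topic, brokerList, requiredAcks, and batchSize are the older (Flume 1.6-era) Kafka sink property names. The Flume 1.8 user guide lists kafka.topic, kafka.bootstrap.servers, kafka.producer.acks, and flumeBatchSize instead, keeping the old names only as deprecated aliases, so the sink block above should also work written as in the sketch below (worth double-checking against the user guide for your exact build):
flume_kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
flume_kafka.sinks.kafka-sink.kafka.topic = topiclogs01
flume_kafka.sinks.kafka-sink.kafka.bootstrap.servers = localhost:9092
flume_kafka.sinks.kafka-sink.kafka.producer.acks = 1
flume_kafka.sinks.kafka-sink.flumeBatchSize = 20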
III. Install Kafka
1. Install the ZooKeeper dependency (version 3.4.12)
1) Download:
wget http://mirror.bit.edu.cn/apache/zookeeper/stable/zookeeper-3.4.12.tar.gz
2) Extract:
tar -zxvf zookeeper-3.4.12.tar.gz -C /home/ppgt/local/
3) Change the data storage directory in conf/zoo.cfg (a minimal example of the resulting file follows):
cp zoo_sample.cfg zoo.cfg
Then set: dataDir=/home/ppgt/kafka_zk_tmp/tmp/zookeeper
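For a single-node setup, the resulting conf/zoo.cfg can be as small as the sketch below; everything except dataDir is assumed to stay at the defaults shipped in zoo_sample.cfg:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/ppgt/kafka_zk_tmp/tmp/zookeeper
clientPort=2181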
2. Install Kafka (version 0.9.0.0)
1) Download:
wget https://archive.apache.org/dist/kafka/0.9.0.0/kafka_2.11-0.9.0.0.tgz
2) Extract:
tar -zxvf kafka_2.11-0.9.0.0.tgz -C /home/ppgt/local/
3) Edit the configuration file config/server.properties (the modified lines are collected into one snippet after this list):
a) zookeeper.connect=localhost:2181  # ZooKeeper service address
b) host.name=localhost  # hostname
c) log.dirs=/home/ppgt/kafka_zk_tmp/tmp/kafka-logs  # where Kafka stores its log data
d) num.partitions=1  # default number of partitions
e) listeners=PLAINTEXT://:9092  # port Kafka listens on
f) broker.id=0  # unique broker id
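Putting the changes together, the edited lines in config/server.properties for this walkthrough are:
broker.id=0
listeners=PLAINTEXT://:9092
host.name=localhost
num.partitions=1
log.dirs=/home/ppgt/kafka_zk_tmp/tmp/kafka-logs
zookeeper.connect=localhost:2181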
IV. Startup and Test Steps
1. Start ZooKeeper
bin/zkServer.sh start
2. Start Flume
// Note: --name flume_kafka must match the component-name prefix in conf/flumetokafka.conf, and the Kafka broker from step 3 should already be running with the topic created, otherwise the Kafka sink has nothing to deliver to
bin/flume-ng agent --conf conf --conf-file conf/flumetokafka.conf --name flume_kafka -Dflume.root.logger=INFO,console
3. Start Kafka
// Start the Kafka server
bin/kafka-server-start.sh /home/ppgt/local/kafka_2.11-0.9.0.0/config/server.properties
// Create a topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic topiclogs01
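To confirm the topic exists before wiring up producers and consumers, the standard kafka-topics.sh describe command can be used:
// Describe the topic (should list topiclogs01 with 1 partition and replication factor 1)
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic topiclogs01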
// Start a Kafka consumer (console consumer)
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topiclogs01 --from-beginning
// Or run the PHP consumer (a sketch of such a script appears at the end of this section)
php /home/ppgt/www_test_ppgt_admin/syslogs_featrue_v1.0/script/cron/sysLogsConsumerRun.php
// Produce a test log line; it should show up in whichever consumer is running
echo 'test log content' >> /home/ppgt/tmpfile/testlogs/data.log
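The PHP consumer script itself is not shown in the original article. Purely as a hypothetical sketch of what sysLogsConsumerRun.php could do in this pipeline (Kafka topic in, MySQL out), assuming the php-rdkafka extension and a MySQL table syslogs(message, created_at), none of which are specified above:
<?php
// Hypothetical consumer sketch: read log lines from topiclogs01 and insert them into MySQL.
// Assumes the php-rdkafka extension is installed; group id, database name, table, and
// credentials below are placeholders, not values from the original article.
$conf = new RdKafka\Conf();
$conf->set('group.id', 'syslogs_consumer');           // assumed consumer group name
$conf->set('metadata.broker.list', 'localhost:9092'); // broker configured in server.properties

$consumer = new RdKafka\KafkaConsumer($conf);
$consumer->subscribe(['topiclogs01']);                // topic created in step IV.3

$pdo  = new PDO('mysql:host=localhost;dbname=syslogs_db;charset=utf8', 'user', 'password');
$stmt = $pdo->prepare('INSERT INTO syslogs (message, created_at) VALUES (?, NOW())');

while (true) {
    $message = $consumer->consume(10 * 1000);         // poll with a 10-second timeout
    switch ($message->err) {
        case RD_KAFKA_RESP_ERR_NO_ERROR:
            $stmt->execute([$message->payload]);      // one row per log line
            break;
        case RD_KAFKA_RESP_ERR__PARTITION_EOF:        // reached the end of a partition
        case RD_KAFKA_RESP_ERR__TIMED_OUT:            // no new messages within the timeout
            break;
        default:
            throw new RuntimeException($message->errstr(), $message->err);
    }
}
The script/cron/ path suggests the real script is launched from cron; the loop above instead assumes a long-running process, so adapt the exit condition to however the script is actually scheduled.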
V. Directory Layout
1. /home/ppgt/local/
conf/zoo.cfg  # ZooKeeper configuration file
conf/flumetokafka.conf  # Flume-to-Kafka configuration file
config/server.properties  # Kafka configuration file
conf/flume-env.sh  # Flume environment configuration file
jdk1.8.0_191  # JDK install directory
apache-flume-1.8.0-bin/  # Flume install directory
kafka_2.11-0.9.0.0/  # Kafka install directory
zookeeper-3.4.12/  # ZooKeeper install directory
2. /home/ppgt/kafka_zk_tmp/tmp/
zookeeper/  # ZooKeeper data directory
kafka-logs/  # Kafka data directory
VI. Log Collection System Design Diagram
VII. Recommended Big Data Articles
Flume:
https://blog.csdn.net/mengfanzhundsc/article/details/81300310?from=singlemessage&isappinstalled=0
https://blog.csdn.net/caodaoxi/article/details/25903867
https://www.cnblogs.com/tonglin0325/p/8963395.html
http://www.wfuyu.com/technology/25331.html
https://blog.csdn.net/Team77/article/details/44154529a
https://blog.csdn.net/wuxintdrh/article/details/79478710
https://blog.csdn.net/l1028386804/article/details/79366155
https://blog.csdn.net/jy02268879/article/details/81024758
http://itindex.net/detail/57323-flume-%E7%9B%91%E6%8E%A7-%E6%97%A5%E6%9C%9F
https://github.com/ypenglyn/locktail/blob/master/locktail_rotate.sh
https://blog.csdn.net/maoyuanming0806/article/details/79391010
https://blog.csdn.net/u010316188/article/details/79905372
https://blog.csdn.net/u011254180/article/details/80000763
Installing Flume on Linux
Using the Flume taildir source to collect files and directories
TaildirSource: real-time collection with multi-file monitoring
A solution for monitoring rolling logs:
Using Flume: monitoring a file and printing newly collected data to the console
Flume: reading log data and writing it to Kafka
Flume sources: SpoolDir
Flume monitoring
Flume monitoring parameters explained
Flume monitoring metrics in detail
Big data series, Flume: several different sources
Kafka:
https://www.jianshu.com/p/a036405f989c
https://blog.csdn.net/nankiao/article/details/78553635
https://blog.csdn.net/weixin_38750084/article/details/82944759
http://www.cnblogs.com/jun1019/p/6656223.html
https://www.cnblogs.com/jun1019/p/6256514.html
http://www.tianshouzhi.com/api/tutorials/kafka/117
https://www.cnblogs.com/hei12138/p/7805475.html
An introduction to Kafka
Installing and using Kafka
Kafka's storage mechanism
Kafka configuration explained
The relationship between Kafka and ZooKeeper
The role of ZooKeeper in Kafka