Flume: configuration problem with spooldir log collection and Kafka output

Posted by 小溪 (潺潺流水, 润泽千里)

Flume configuration:

#DBFile
DBFile.sources = sources1
DBFile.sinks = sinks1
DBFile.channels = channels1

# DBFile-DB-Source
DBFile.sources.sources1.type = spooldir
DBFile.sources.sources1.spoolDir = /var/log/apache/flumeSpool/db
DBFile.sources.sources1.inputCharset = UTF-8

# DBFile-Sink
DBFile.sinks.sinks1.type = org.apache.flume.sink.kafka.KafkaSink
DBFile.sinks.sinks1.topic = DBFile
DBFile.sinks.sinks1.brokerList = hdp01:6667,hdp02:6667,hdp07:6667
DBFile.sinks.sinks1.requiredAcks = 1
DBFile.sinks.sinks1.batchSize = 2000

# DBFile-Channel
DBFile.channels.channels1.type = memory
DBFile.channels.channels1.capacity = 10000
DBFile.channels.channels1.transactionCapacity = 1000

# DBFile-Source And Sink to the channel
DBFile.sources.sources1.channels = channels1
DBFile.sinks.sinks1.channel = channels1
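
Assuming this file is saved as, say, conf/dbfile.conf (the file name here is just an example), the agent can be started with the stock flume-ng launcher; note that the value passed to --name must match the DBFile prefix used throughout the properties:

    bin/flume-ng agent --conf conf --conf-file conf/dbfile.conf --name DBFile -Dflume.root.logger=INFO,console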


Symptom: the first time files are dropped into the spool directory, Flume processes them quickly, but files uploaded afterwards remain marked as unprocessed. After restarting the Flume service, the pending files are processed immediately again.
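
A quick way to see which files the spooldir source has actually finished with is to list the spool directory: by default the source renames fully ingested files with a .COMPLETED suffix (the path below is the one from the config above):

    ls -l /var/log/apache/flumeSpool/db
    # entries ending in .COMPLETED have been ingested; bare file names are still pending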

Testing showed that the cause lay in this setting: DBFile.sinks.sinks1.requiredAcks = -1 (the listing above already shows the corrected value).

The official description of requiredAcks: "How many replicas must acknowledge a message before it's considered successfully written. Accepted values are 0 (never wait for acknowledgement), 1 (wait for the leader only), and -1 (wait for all replicas). Set this to -1 to avoid data loss in some cases of leader failure."

Changing this value to 1 resolved the problem.
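
To confirm that events actually reach Kafka after the change, a console consumer can be attached to the topic. This is only a sketch: it assumes the standard Kafka CLI tools are available on one of the brokers, and older Kafka releases take --zookeeper instead of --bootstrap-server:

    bin/kafka-console-consumer.sh --bootstrap-server hdp01:6667 --topic DBFile --from-beginning

As a side note, newer Flume releases (1.7 and later) renamed these Kafka sink properties to kafka.bootstrap.servers, kafka.topic, and kafka.producer.acks; the requiredAcks / brokerList names used here belong to the older sink.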
