A Clever Use of rsyslog: Collecting Multiple Log Streams While Filtering and Splitting One of Them
The logs are Python logs written out by supervisor, and lines of several different formats end up in the same set of files. The requirement is to separate the lines containing the keywords "post" and "ERROR" and send them into two different Kafka topics. The complication: rsyslog on these hosts is already collecting the nginx access logs, and that pipeline must not be affected. In other words, a plain top-level if filter cannot be used for the separation, because the streams could get mixed up. The logs to be collected look like this:
ERROR:root:requeue {"withRefresh": false, "localPath": "/data1/ms/cache/file_store_location/n.fdaimg.cn/translate/20170219/oobE-fyarref6029227.jpg?43", "remotePath": "translate/20170219/oobE-fyarref6029227.jpg?43"}
INFO:root:2017-02-22T11:53:11.395165, {"withRefresh": false, "localPath": "/data1/ms/cache/file_store_location/n.adfaimg.cn/w/20170222/aue--fyarref6523250.jpeg", "remotePath": "w/20170222/aue--fyarref6523250.jpeg"}
INFO:root:post /data1/ms/cache/file_store_location/n.fsdaimg.cn/w/20170222/aue--fyarref6523250.jpeg to w/20170222/aue--fyarref6523250.jpeg took 112.954854965 ms
...
The rsyslog rules configured before this change were as follows:
module(load="imfile")
module(load="omkafka")

$PreserveFQDN on

main_queue(
    queue.workerthreads="10"       # threads to work on the queue
    queue.dequeueBatchSize="1000"  # max number of messages to process at once
    queue.size="50000"             # max queue size
)

######################### nginx access #####################
$template nginxlog,"xd172\.16\.11\.44`%msg%"

ruleset(name="nginxlog") {
    action(
        broker=["10.13.88.190:9092","10.13.88.191:9092","10.13.88.192:9092","10.13.88.193:9092"]
        type="omkafka"
        topic="cms-nimg-s3"
        template="nginxlog"
        partitions.auto="on"
    )
}

input(type="imfile"
      File="/data1/ms/comos/logs/access_s3.log"
      Tag=""
      ruleset="nginxlog"
      freshStartTail="on"
      reopenOnTruncate="on"
)
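As a quick way to check what the nginxlog ruleset is actually receiving, an omfile action can be placed next to the omkafka action to tee the same messages to a local file. This is a debugging sketch of my own, not part of the original setup, and the output path is made up:

# debugging aid (my addition, not in the original config): put this action
# inside ruleset(name="nginxlog"), alongside the omkafka action, so the same
# messages are also written to a local file for inspection
action(type="omfile" file="/var/log/rsyslog/nginxlog_debug.log" template="nginxlog")

omfile is a built-in output module, so no extra module(load=...) line is needed for it.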
My first thought was to do the separation with a plain if statement, but I realized that every log line would flow through that if, which could easily mix the streams. Testing then revealed that if statements can actually be nested inside a ruleset. Magical rsyslog: this solved a big problem. The configuration is as follows:
module(load="imfile")
module(load="omkafka")

$PreserveFQDN on

main_queue(
    queue.workerthreads="10"       # threads to work on the queue
    queue.dequeueBatchSize="1000"  # max number of messages to process at once
    queue.size="50000"             # max queue size
)

######################### nginx access #####################
$template nginxlog,"xd172\.16\.11\.44`%msg%"

ruleset(name="nginxlog") {
    action(
        broker=["10.13.88.190:9092","10.13.88.191:9092","10.13.88.192:9092","10.13.88.193:9092"]
        type="omkafka"
        topic="cms-nimg-s3"
        template="nginxlog"
        partitions.auto="on"
    )
}

input(type="imfile"
      File="/data1/ms/comos/logs/access_s3.log"
      Tag=""
      ruleset="nginxlog"
      freshStartTail="on"
      reopenOnTruncate="on"
)

####################### python s3 post error ################################
$template s3post,"xd172\.16\.11\.43 %msg%"

ruleset(name="s3post") {
    if ( $msg contains "post" ) then {
        action(
            broker=["10.13.88.190:9092","10.13.88.191:9092","10.13.88.192:9092","10.13.88.193:9092"]
            type="omkafka"
            topic="cms-nimg-s3-post"
            template="s3post"
            partitions.auto="on"
        )
    }
    if ( $msg contains "ERROR" ) then {
        action(
            broker=["10.13.88.190:9092","10.13.88.191:9092","10.13.88.192:9092","10.13.88.193:9092"]
            type="omkafka"
            topic="cms-nimg-s3-post-error"
            template="s3post"
            partitions.auto="on"
        )
    }
}

input(type="imfile"
      File="/data1/ms/comos/logs/s3q_daemon_0.err"
      Tag=""
      ruleset="s3post"
      freshStartTail="on"
      reopenOnTruncate="on"
)

input(type="imfile"
      File="/data1/ms/comos/logs/s3q_daemon_1.err"
      Tag=""
      ruleset="s3post"
      freshStartTail="on"
      reopenOnTruncate="on"
)

input(type="imfile"
      File="/data1/ms/comos/logs/s3q_daemon_2.err"
      Tag=""
      ruleset="s3post"
      freshStartTail="on"
      reopenOnTruncate="on"
)
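One caveat: contains "post" matches the substring anywhere in the message, so an INFO line whose JSON payload happened to include the word "post" would also land in the post topic. A slightly stricter variant, sketched below as my own suggestion rather than the author's config, keys on the Python logging prefixes visible in the sample lines:

####################### stricter matching (sketch, not the original config) #######
# assumption: every wanted "post" line carries the prefix "INFO:root:post" and
# every error line carries "ERROR:root:", as in the sample logs shown earlier
ruleset(name="s3post") {
    if ( $msg contains "INFO:root:post" ) then {
        action(
            broker=["10.13.88.190:9092","10.13.88.191:9092","10.13.88.192:9092","10.13.88.193:9092"]
            type="omkafka"
            topic="cms-nimg-s3-post"
            template="s3post"
            partitions.auto="on"
        )
    }
    if ( $msg contains "ERROR:root:" ) then {
        action(
            broker=["10.13.88.190:9092","10.13.88.191:9092","10.13.88.192:9092","10.13.88.193:9092"]
            type="omkafka"
            topic="cms-nimg-s3-post-error"
            template="s3post"
            partitions.auto="on"
        )
    }
}

After any edit, running rsyslogd -N1 validates the configuration syntax before the daemon is restarted.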
This article comes from the "奔跑的linux" blog; please keep this attribution: http://benpaozhe.blog.51cto.com/10239098/1903720