Rolling-log behavior in Hive's log4j2.properties when the file pattern includes a time format
Posted by 江南独孤客
The relevant logging configuration is as follows:
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
status = WARN
name = HiveLog4j2
packages = org.apache.hadoop.hive.ql.log
# list of properties
property.hive.log.level = INFO
property.hive.root.logger = DRFA
property.hive.log.dir = /var/log/udp/2.0.0.0/hive/
property.hive.log.file = hive.log
property.hive.perflogger.log.level = INFO
# list of all appenders
appenders = console, DRFA
# console appender
appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{ISO8601} %5p [%t] %c{2}: %m%n
# daily rolling file appender
appender.DRFA.type = RollingRandomAccessFile
appender.DRFA.name = DRFA
appender.DRFA.fileName = ${sys:hive.log.dir}/${sys:hive.log.file}
# Use %pid in the filePattern to append <process-id>@<host-name> to the filename if you want separate log files for different CLI sessions
appender.DRFA.filePattern = ${sys:hive.log.dir}/${sys:hive.log.file}.%d{yyyy-MM-dd}-%i.log
appender.DRFA.layout.type = PatternLayout
appender.DRFA.layout.pattern = %d{ISO8601} %5p [%t] %c{2}: %m%n
appender.DRFA.policies.type = Policies
appender.DRFA.policies.time.type = TimeBasedTriggeringPolicy
appender.DRFA.policies.time.interval = 1
appender.DRFA.policies.time.modulate = true
appender.DRFA.strategy.type = DefaultRolloverStrategy
appender.DRFA.strategy.max = 1
appender.DRFA.policies.size.type = SizeBasedTriggeringPolicy
appender.DRFA.policies.size.size = 1MB
# Deletes logs IfLastModified date is greater than number of days
# automatically delete hive log
appender.DRFA.strategy.action.type = Delete
appender.DRFA.strategy.action.basePath = ${sys:hive.log.dir}
appender.DRFA.strategy.action.condition.type = IfFileName
appender.DRFA.strategy.action.condition.regex = hive*.*log.*
appender.DRFA.strategy.action.condition.nested_condition.type = IfAny
# Deletes logs based on total accumulated size, keeping the most recent
#appender.DRFA.strategy.action.condition.nested_condition.fileSize.type = IfAccumulatedFileSize
#appender.DRFA.strategy.action.condition.nested_condition.fileSize.exceeds = 60GB
# Deletes logs IfLastModified date is greater than number of days
appender.DRFA.strategy.action.condition.nested_condition.lastMod.type = IfLastModified
appender.DRFA.strategy.action.condition.nested_condition.lastMod.age = 30D
# list of all loggers
loggers = NIOServerCnxn, ClientCnxnSocketNIO, DataNucleus, Datastore, JPOX, PerfLogger, AmazonAws, ApacheHttp
logger.NIOServerCnxn.name = org.apache.zookeeper.server.NIOServerCnxn
logger.NIOServerCnxn.level = WARN
logger.ClientCnxnSocketNIO.name = org.apache.zookeeper.ClientCnxnSocketNIO
logger.ClientCnxnSocketNIO.level = WARN
logger.DataNucleus.name = DataNucleus
logger.DataNucleus.level = ERROR
logger.Datastore.name = Datastore
logger.Datastore.level = ERROR
logger.JPOX.name = JPOX
logger.JPOX.level = ERROR
logger.AmazonAws.name=com.amazonaws
logger.AmazonAws.level = INFO
logger.ApacheHttp.name=org.apache.http
logger.ApacheHttp.level = INFO
logger.PerfLogger.name = org.apache.hadoop.hive.ql.log.PerfLogger
logger.PerfLogger.level = ${sys:hive.perflogger.log.level}
# root logger
rootLogger.level = ${sys:hive.log.level}
rootLogger.appenderRefs = root
rootLogger.appenderRef.root.ref = ${sys:hive.root.logger}
The rollover settings were then adjusted as follows (in a properties file, a later definition of the same key overrides the earlier one):

appender.DRFA.type = RollingRandomAccessFile
appender.DRFA.strategy.type = DefaultRolloverStrategy
appender.DRFA.strategy.max = 3
appender.DRFA.policies.type = Policies
appender.DRFA.policies.size.type = SizeBasedTriggeringPolicy
appender.DRFA.policies.size.size = 2MB
appender.DRFA.filePattern = ${sys:hive.log.dir}/${sys:hive.log.file}.%d{yyyy-MM-dd}-%i.log.zip
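As an aside, the Delete action in the configuration above removes rolled logs whose names match the configured regex and whose last-modified time is older than 30 days; Log4j2 runs this check itself at rollover time. As a rough illustration of that policy only (not part of Hive or Log4j2, and using a tidied-up regex rather than the `hive*.*log.*` from the config), the same rule could be expressed in Python:

```python
import os
import re
import time

def delete_old_logs(base_path: str,
                    name_regex: str = r"hive.*\.log.*",
                    max_age_days: int = 30) -> list:
    """Illustrative equivalent of the Log4j2 Delete action above:
    remove files directly under base_path whose name matches
    name_regex and whose mtime is older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(base_path):
        full = os.path.join(base_path, name)
        if (os.path.isfile(full)
                and re.fullmatch(name_regex, name)
                and os.path.getmtime(full) < cutoff):
            os.remove(full)
            removed.append(name)
    return removed
```

Note that the active `hive.log` also matches the name pattern, but it is never older than 30 days while Hive is writing to it, so the age condition protects it — the same reason the IfLastModified condition matters in the real configuration.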
The rollover produces the following files:
-rw------- 1 omm wheel 84507 Dec 19 16:17 hive.log.2022-05-09-1.log.zip
-rw------- 1 omm wheel 93363 Dec 19 16:29 hive.log.2022-05-09-2.log.zip
-rw------- 1 omm wheel 84507 Dec 19 16:35 hive.log.2022-05-09-3.log.zip
As you can see, the logs roll over normally while the date stays the same, but as soon as the date changes, the log's %i index restarts counting from 1.
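This matches how the %i counter in the filePattern behaves: it is scoped to the current value of the %d date pattern, so each new date begins a new sequence at 1. A toy sketch of that bookkeeping (an illustration of the naming behavior, not actual Log4j2 code):

```python
from datetime import date

class RolloverNamer:
    """Toy model of the %d{yyyy-MM-dd}-%i part of the filePattern:
    the %i counter restarts at 1 whenever the date component changes."""
    def __init__(self):
        self._current = None  # date of the sequence in progress
        self._i = 0

    def next_name(self, day: date) -> str:
        if day != self._current:
            self._current = day
            self._i = 0  # new date -> counter resets
        self._i += 1
        return f"hive.log.{day.isoformat()}-{self._i}.log.zip"

namer = RolloverNamer()
print(namer.next_name(date(2022, 5, 9)))   # hive.log.2022-05-09-1.log.zip
print(namer.next_name(date(2022, 5, 9)))   # hive.log.2022-05-09-2.log.zip
print(namer.next_name(date(2022, 5, 10)))  # hive.log.2022-05-10-1.log.zip
```

If you want a counter that keeps increasing across dates, drop the %d part from the filePattern (or move it out of the rolled-file name) so that only %i distinguishes the archives.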