An analysis of MDC in log4j2
Posted by 5ycode
Background
Our project uses the log4j2 logging framework, and we have a requirement: the globally unique flow id of each request must be printed into the log4j2 log files. I had been using log4j2 for a long time without ever looking into how this actually works, so this post digs into it.
SLF4J
We all know that SLF4J is the facade and log4j2 is one concrete implementation. SLF4J (Simple Logging Facade for Java) defines a standard, uniform logging API for Java; the actual logging is done by whichever framework implements it, such as log4j2 or logback. Let's start from the official SLF4J site.
Here is the implementation diagram from the official site:
From that diagram we can see that an application coded against SLF4J can switch the logging implementation and output freely. This is also why the Alibaba coding guidelines insist that logging always go through SLF4J: the code is decoupled from the concrete framework, so replacing the logging framework requires no code changes.
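To make the facade idea concrete, here is a minimal sketch (the OrderService class and its message are made up for illustration): application code touches only SLF4J types, and whichever binding jar happens to be on the classpath decides how the line is actually written.

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    // Only SLF4J types appear here; swapping log4j2 for logback is purely a dependency change.
    private static final Logger log = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // Parameterized message: the string is only built if INFO is enabled.
        log.info("order {} accepted", orderId);
    }
}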
Source code tracing
Dependencies
<dependency>
<groupId>org.apache.logging.log4j</groupId>
<artifactId>log4j-slf4j-impl</artifactId>
</dependency>
<!-- LMAX Disruptor, required by log4j2's lock-free async loggers -->
<dependency>
<groupId>com.lmax</groupId>
<artifactId>disruptor</artifactId>
<version>${disruptor.version}</version>
</dependency>
<dependency>
<groupId>ch.qos.logback</groupId>
<artifactId>logback-classic</artifactId>
<version>1.2.5</version>
</dependency>
log4j2.xml: the pattern expression below writes the MDC content into every log line (note the %X{...} conversions in LOG_PATTERN):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE xml>
<!-- Log4j 2 configuration file; changes are checked for and applied every 60 seconds -->
<Configuration status="warn" monitorInterval="60" strict="true">
<Properties>
<Property name="PROJECT_NAME">other</Property>
<Property name="LOG_HOME">logs/${PROJECT_NAME}</Property>
<Property name="COMMON_LEVEL">info</Property>
<!-- The key part: %X{key} prints the value stored in the MDC under that key -->
<Property name="LOG_PATTERN">%d{yyyy-MM-dd HH:mm:ss,SSS} [%level]-[Thread:%t]-[%logger{36}]-lsh:%X{lsh}-feignId:%X{feignId}-token:%X{token} - %msg%n</Property>
<Property name="LEVEL_ERROR">info</Property>
</Properties>
<Appenders>
<!-- Console output -->
<Console name="console" target="SYSTEM_OUT" follow="true">
<PatternLayout charset="utf-8" pattern="${LOG_PATTERN}" />
</Console>
<RollingFile name="debugAppender" fileName="${LOG_HOME}/debug.log"
filePattern="${LOG_HOME}/$${date:yyyy-MM}/debug-%d{MM-dd}.%i.log">
<filters>
<ThresholdFilter level="error" onMatch="DENY" onMismatch="NEUTRAL" />
<ThresholdFilter level="info" onMatch="DENY" onMismatch="NEUTRAL" />
<ThresholdFilter level="warn" onMatch="DENY" onMismatch="NEUTRAL" />
<ThresholdFilter level="debug" onMatch="ACCEPT" onMismatch="DENY" />
</filters>
<PatternLayout charset="UTF-8">
<pattern>${LOG_PATTERN}</pattern>
</PatternLayout>
<Policies>
<OnStartupTriggeringPolicy />
<TimeBasedTriggeringPolicy />
<!-- Roll the file over once it reaches this size -->
<SizeBasedTriggeringPolicy size="100 MB" />
</Policies>
<!-- Maximum index of archived files (files beyond it are deleted); the default is 7 -->
<DefaultRolloverStrategy max="500">
<Delete basePath="${LOG_HOME}" maxDepth="500">
<IfFileName glob="*/*/debug-*.log">
<IfLastModified age="15d" />
</IfFileName>
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="infoAppender" fileName="${LOG_HOME}/info.log"
filePattern="${LOG_HOME}/$${date:yyyy-MM}/info-%d{MM-dd}.%i.log">
<filters>
<ThresholdFilter level="error" onMatch="DENY" onMismatch="NEUTRAL" />
<ThresholdFilter level="warn" onMatch="DENY" onMismatch="NEUTRAL" />
<ThresholdFilter level="info" onMatch="ACCEPT" onMismatch="DENY" />
</filters>
<PatternLayout charset="UTF-8">
<pattern>${LOG_PATTERN}</pattern>
</PatternLayout>
<Policies>
<OnStartupTriggeringPolicy />
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="100 MB" />
</Policies>
<DefaultRolloverStrategy max="500">
<Delete basePath="${LOG_HOME}" maxDepth="500">
<IfFileName glob="*/*/info-*.log">
<IfLastModified age="15d" />
</IfFileName>
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="warnAppender" fileName="${LOG_HOME}/warn.log"
filePattern="${LOG_HOME}/$${date:yyyy-MM}/warn-%d{MM-dd}.%i.log">
<filters>
<ThresholdFilter level="error" onMatch="DENY" onMismatch="NEUTRAL" />
<ThresholdFilter level="warn" onMatch="ACCEPT" onMismatch="DENY" />
</filters>
<PatternLayout charset="UTF-8">
<pattern>${LOG_PATTERN}</pattern>
</PatternLayout>
<Policies>
<OnStartupTriggeringPolicy />
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="100 MB" />
</Policies>
<DefaultRolloverStrategy max="500">
<Delete basePath="${LOG_HOME}" maxDepth="500">
<IfFileName glob="*/*/warn-*.log">
<IfLastModified age="15d" />
</IfFileName>
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="errorAppender" fileName="${LOG_HOME}/error.log"
filePattern="${LOG_HOME}/$${date:yyyy-MM}/error-%d{MM-dd}.%i.log">
<filters>
<ThresholdFilter level="error" onMatch="ACCEPT" onMismatch="DENY" />
</filters>
<PatternLayout charset="UTF-8">
<pattern>${LOG_PATTERN}</pattern>
</PatternLayout>
<Policies>
<OnStartupTriggeringPolicy />
<TimeBasedTriggeringPolicy />
<SizeBasedTriggeringPolicy size="100 MB" />
</Policies>
<DefaultRolloverStrategy max="500">
<Delete basePath="${LOG_HOME}" maxDepth="500">
<IfFileName glob="*/*/error-*.log">
<IfLastModified age="15d" />
</IfFileName>
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<AsyncRoot level="${COMMON_LEVEL}" includeLocation="true">
<AppenderRef ref="console" />
<AppenderRef ref="debugAppender" />
<AppenderRef ref="infoAppender" />
<AppenderRef ref="warnAppender" />
<AppenderRef ref="errorAppender" />
<!-- <appender-ref ref="KafkaAsyncAppender" /> -->
</AsyncRoot>
<!-- MyBatis logging -->
<logger name="org.apache.ibatis" level="TRACE" additivity="false">
<AppenderRef ref="debugAppender" />
<AppenderRef ref="infoAppender" />
<AppenderRef ref="warnAppender" />
<AppenderRef ref="errorAppender" />
</logger>
<logger name="java.sql.Connection" level="DEBUG" additivity="false">
<AppenderRef ref="debugAppender" />
<AppenderRef ref="infoAppender" />
<AppenderRef ref="warnAppender" />
<AppenderRef ref="errorAppender" />
</logger>
<logger name="java.sql.Statement" level="DEBUG" additivity="false">
<AppenderRef ref="debugAppender" />
<AppenderRef ref="infoAppender" />
<AppenderRef ref="warnAppender" />
<AppenderRef ref="errorAppender" />
</logger>
<logger name="java.sql.PreparedStatement" level="DEBUG" additivity="false">
<AppenderRef ref="debugAppender" />
<AppenderRef ref="infoAppender" />
<AppenderRef ref="warnAppender" />
<AppenderRef ref="errorAppender" />
</logger>
<logger name="org.springframework.jdbc" level="DEBUG" additivity="false">
<AppenderRef ref="debugAppender" />
<AppenderRef ref="infoAppender" />
<AppenderRef ref="warnAppender" />
<AppenderRef ref="errorAppender" />
</logger>
<logger name="com.alibaba.dubbo" level="ERROR" additivity="false">
<AppenderRef ref="debugAppender" />
<AppenderRef ref="infoAppender" />
<AppenderRef ref="warnAppender" />
<AppenderRef ref="errorAppender" />
<!-- <appender-ref ref="KafkaAsyncAppender" /> -->
</logger>
<logger name="org.apache.zookeeper" level="ERROR" additivity="false">
<AppenderRef ref="debugAppender" />
<AppenderRef ref="infoAppender" />
<AppenderRef ref="warnAppender" />
<AppenderRef ref="errorAppender" />
<!-- <appender-ref ref="KafkaAsyncAppender" />-->
</logger>
<logger name="com.alibaba.rocketmq" level="ERROR" additivity="false">
<AppenderRef ref="debugAppender" />
<AppenderRef ref="infoAppender" />
<AppenderRef ref="warnAppender" />
<AppenderRef ref="errorAppender" />
</logger>
<!-- application loggers -->
<logger name="com.onecard.loan.mapper" level="DEBUG"/>
<logger name="com.onecard" level="DEBUG"/>
</Loggers>
</Configuration>
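Before diving into the source, here is how the background requirement is typically met: something at the edge of each request puts the global flow id (lsh) into the MDC so that %X{lsh} is populated for every log line of that request. The sketch below assumes a Servlet 4.0+ web application; the FlowIdFilter class and the X-Lsh header name are hypothetical.

import java.io.IOException;
import java.util.UUID;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import org.slf4j.MDC;

public class FlowIdFilter implements Filter {
    private static final String LSH = "lsh";

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Reuse the flow id passed by an upstream caller, otherwise start a new one.
        String lsh = ((HttpServletRequest) request).getHeader("X-Lsh");
        if (lsh == null || lsh.isEmpty()) {
            lsh = UUID.randomUUID().toString();
        }
        MDC.put(LSH, lsh); // %X{lsh} in LOG_PATTERN picks this up
        try {
            chain.doFilter(request, response);
        } finally {
            MDC.clear(); // the worker thread is pooled, so always clean up
        }
    }
}

For the source walkthrough that follows, though, a plain unit test is enough: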
import java.util.UUID;
import org.junit.Test;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class Log4jMdcTest {
    private final Logger logger = LoggerFactory.getLogger(Log4jMdcTest.class);

    @Test
    public void test() {
        MDC.put("lsh", UUID.randomUUID().toString()); // %X{lsh} in LOG_PATTERN renders this
        logger.info("test info level");
    }
}
Starting from LoggerFactory.getLogger, we trace through the code step by step. Inside slf4j-api, binding discovery looks like this:
// org.slf4j.LoggerFactory (slf4j-api 1.7.x, abridged)
static Set<URL> findPossibleStaticLoggerBinderPathSet() {
    Set<URL> staticLoggerBinderPathSet = new LinkedHashSet<URL>();
    try {
        ClassLoader loggerFactoryClassLoader = LoggerFactory.class.getClassLoader();
        Enumeration<URL> paths;
        if (loggerFactoryClassLoader == null) {
            paths = ClassLoader.getSystemResources(STATIC_LOGGER_BINDER_PATH);
        } else {
            paths = loggerFactoryClassLoader.getResources(STATIC_LOGGER_BINDER_PATH);
        }
        while (paths.hasMoreElements()) {
            URL path = paths.nextElement();
            staticLoggerBinderPathSet.add(path);
        }
    } catch (IOException ioe) {
        Util.report("Error getting resources from path", ioe);
    }
    return staticLoggerBinderPathSet;
}
We can see that the class loader is asked for every org.slf4j.impl.StaticLoggerBinder resource on the classpath. Every logging framework that implements the SLF4J API ships a StaticLoggerBinder at this fixed path.
Now look at the startup output: although two binding jars are present, only one ends up bound, in practice the first one found on the classpath, which here follows the order the dependencies are declared in the POM:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/yxk/javaServer/jar/repository/org/apache/logging/log4j/log4j-slf4j-impl/2.13.3/log4j-slf4j-impl-2.13.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/yxk/javaServer/jar/repository/ch/qos/logback/logback-classic/1.2.5/logback-classic-1.2.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2021-07-30 09:25:19,210 [INFO]-[Thread:main]-[com.yxkong.other.Log4jMdcTest]-lsh:8dba4b78-d967-4a2f-ade9-8ec7769ecda6-feignId:-token: - test info level
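The "Actual binding" line can also be checked from code: SLF4J exposes the factory it bound to, and with log4j-slf4j-impl winning it should report Log4jLoggerFactory. A tiny sketch (the class name is arbitrary):

import org.slf4j.LoggerFactory;

public class BindingCheck {
    public static void main(String[] args) {
        // Prints the ILoggerFactory implementation SLF4J bound to at startup,
        // e.g. org.apache.logging.slf4j.Log4jLoggerFactory when log4j-slf4j-impl wins.
        System.out.println(LoggerFactory.getILoggerFactory().getClass().getName());
    }
}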
Continuing into StaticLoggerBinder: it is a singleton, and the logger factory it creates is Log4jLoggerFactory.
// org.slf4j.impl.StaticLoggerBinder from log4j-slf4j-impl (abridged)
public final class StaticLoggerBinder implements LoggerFactoryBinder {
    // the singleton instance
    private static final StaticLoggerBinder SINGLETON = new StaticLoggerBinder();

    private final ILoggerFactory loggerFactory;

    private StaticLoggerBinder() {
        loggerFactory = new Log4jLoggerFactory();
    }
}
Now let's look at MDC itself:
public class MDC {
    static MDCAdapter mdcAdapter;

    private static MDCAdapter bwCompatibleGetMDCAdapterFromBinder() throws NoClassDefFoundError {
        try {
            // log4j2's StaticMDCBinder has no getSingleton() method,
            // so this always drops into the catch branch
            return StaticMDCBinder.getSingleton().getMDCA();
        } catch (NoSuchMethodError nsme) {
            // binding is probably a version of SLF4J older than 1.7.14
            return StaticMDCBinder.SINGLETON.getMDCA();
        }
    }

    static {
        try {
            mdcAdapter = bwCompatibleGetMDCAdapterFromBinder();
        } catch (NoClassDefFoundError ncde) {
            mdcAdapter = new NOPMDCAdapter();
        }
    }
}
public final class StaticMDCBinder {
    public MDCAdapter getMDCA() {
        return new Log4jMDCAdapter();
    }
}
The implementation ultimately lands in Log4jMDCAdapter, which simply delegates to log4j2's ThreadContext:
public class Log4jMDCAdapter implements MDCAdapter {
    public void put(final String key, final String val) {
        ThreadContext.put(key, val);
    }
}
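Because MDC.put simply forwards to ThreadContext.put, the SLF4J MDC and log4j2's native ThreadContext API read and write the same per-thread map when log4j-slf4j-impl is the binding. A quick sketch to confirm (class name and values are arbitrary):

import org.apache.logging.log4j.ThreadContext;
import org.slf4j.MDC;

public class MdcThreadContextDemo {
    public static void main(String[] args) {
        MDC.put("lsh", "demo-123");
        // Both APIs see the same value because the MDC adapter delegates to ThreadContext.
        System.out.println(ThreadContext.get("lsh")); // demo-123
        System.out.println(MDC.get("lsh"));           // demo-123
        ThreadContext.clearMap();
    }
}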
Now let's look at ThreadContext:
public final class ThreadContext {
    static {
        init();
    }

    static void init() {
        // defaults to true: the context stack is enabled unless disabled via property
        useStack = !(managerProps.getBooleanProperty(DISABLE_STACK) || disableAll);
        // defaults to true: the context map is enabled unless disabled via property
        boolean useMap = !(managerProps.getBooleanProperty(DISABLE_MAP) || disableAll);
        contextStack = new DefaultThreadContextStack(useStack);
        if (!useMap) {
            contextMap = new NoOpThreadContextMap();
        } else {
            contextMap = ThreadContextMapFactory.createThreadContextMap();
        }
    }
}
Next, ThreadContextMapFactory:
// org.apache.logging.log4j.spi.ThreadContextMapFactory (abridged)
public static ThreadContextMap createThreadContextMap() {
    // ...
    if (result == null) {
        result = createDefaultThreadContextMap();
    }
    return result;
}

private static ThreadContextMap createDefaultThreadContextMap() {
    // in this setup we end up here
    return new CopyOnWriteSortedArrayThreadContextMap();
}

// org.apache.logging.log4j.spi.CopyOnWriteSortedArrayThreadContextMap (abridged)
public CopyOnWriteSortedArrayThreadContextMap() {
    this.localMap = createThreadLocalMap();
}

private ThreadLocal<StringMap> createThreadLocalMap() {
    // here we can see that log4j2's MDC can also propagate to child threads,
    // but only through an InheritableThreadLocal
    if (inheritableMap) {
        return new InheritableThreadLocal<StringMap>() {
            @Override
            protected StringMap childValue(final StringMap parentValue) {
                if (parentValue == null) {
                    return null;
                }
                final StringMap stringMap = createStringMap(parentValue);
                stringMap.freeze();
                return stringMap;
            }
        };
    }
    // if not inheritable, return plain ThreadLocal with null as initial value
    return new ThreadLocal<>();
}
At this point we can see that the MDC is backed by a ThreadLocal. By setting the system property isThreadContextMapInheritable=true we can switch to the InheritableThreadLocal variant, which hands the context map down to child threads.
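A small sketch of what that switch changes (class name and values are arbitrary). The property has to be set before log4j2's ThreadContext class initializes, so in practice it is usually passed as -DisThreadContextMapInheritable=true on the JVM command line; setting it in code as below only works because nothing has touched the logging system yet.

import org.slf4j.MDC;

public class InheritableMdcDemo {
    public static void main(String[] args) throws InterruptedException {
        // Equivalent to launching the JVM with -DisThreadContextMapInheritable=true;
        // must happen before the first use of MDC/ThreadContext.
        System.setProperty("isThreadContextMapInheritable", "true");

        MDC.put("lsh", "parent-flow-id");
        Thread child = new Thread(() ->
                // With the inheritable map the child prints "parent-flow-id";
                // with the default ThreadLocal it would print null.
                System.out.println("child sees lsh = " + MDC.get("lsh")));
        child.start();
        child.join();
    }
}

Note that inheritance only happens when the child thread is created, so it does not help for threads that already live in a pool.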
Of course, besides enabling the inheritable ThreadLocal, we can also override submit in ThreadPoolTaskExecutor and copy the parent thread's MDC onto the worker thread:
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.Future;

import org.apache.commons.collections4.MapUtils; // or commons-collections 3, depending on the project
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

import com.google.common.base.Strings;

public class MdcThreadPoolTaskExecutor extends ThreadPoolTaskExecutor {
    private static final long serialVersionUID = 53094863482765933L;
    private final Logger logger = LoggerFactory.getLogger(MdcThreadPoolTaskExecutor.class);
    private static final String LSH = "lsh";

    @Override
    public <T> Future<T> submit(Callable<T> task) {
        // captured on the submitting (parent) thread
        Map<String, String> context = MDC.getCopyOfContextMap();
        logger.info("----MDC content:{}", context);
        return super.submit(() -> {
            // hand the parent thread's MDC content to the child thread
            T result;
            if (MapUtils.isNotEmpty(context) && !Strings.isNullOrEmpty(context.get(LSH))) {
                MDC.setContextMap(context);
            } else {
                // no flow id in the parent: create a new one (StringUtils here is the project's own utility)
                MDC.put(LSH, "lsh_" + StringUtils.randomUUIDSplit());
            }
            try {
                result = task.call();
            } finally {
                try {
                    MDC.clear();
                } catch (Exception e2) {
                    logger.warn("mdc clear exception.", e2);
                }
            }
            return result;
        });
    }
}
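A sketch of how such an executor would be wired up; the bean method and pool sizes are arbitrary:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ExecutorConfig {
    @Bean
    public MdcThreadPoolTaskExecutor mdcExecutor() {
        MdcThreadPoolTaskExecutor executor = new MdcThreadPoolTaskExecutor();
        executor.setCorePoolSize(4);
        executor.setMaxPoolSize(8);
        executor.setQueueCapacity(100);
        executor.setThreadNamePrefix("mdc-pool-");
        executor.initialize();
        return executor;
    }
}

Every Callable submitted through this executor then logs with the caller's lsh, because the override above copies the MDC map onto the pooled worker thread and clears it afterwards.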
To sum up:
The MDC is ultimately bound to the business thread.
It is backed by a ThreadLocal, so its lifetime ends with the business thread.
Setting the system property isThreadContextMapInheritable=true switches it to the child-thread-aware InheritableThreadLocal (older Logback versions used InheritableThreadLocal by default).
That wraps up today's analysis. In a follow-up we will look at how log4j2 loads its configuration file and how these pattern variables are rendered into the output.
References:
http://www.slf4j.org/manual.html
https://logging.apache.org/log4j/2.x/manual/layouts.html