Outputting JSON Logs from Java

Posted by isea533

To use JSON-formatted logs in our services, this post sets up fluentd and logstash test environments and collects the relevant configuration details.

1. fluentd test environment

Docs: https://docs.fluentd.org/v/0.12/articles/docker-logging-efk-compose

docker compose:

version: "3"

services:
  elasticsearch:
    image: elasticsearch:8.4.3
    container_name: elasticsearch
    restart: always
    environment:
      - cluster.name=elasticsearch
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - efk-net
  kibana:
    image: kibana:8.4.3
    container_name: kibana
    restart: always
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    networks:
      - efk-net
  fluentd:
    build: ./fluentd
    volumes:
      - ./fluentd/conf:/fluentd/etc
    links:
      - "elasticsearch"
    restart: always
    container_name: fluentd
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - efk-net
  web:
    image: httpd
    container_name: web
    ports:
      - "80:80"
    links:
      - fluentd
    networks:
      - efk-net
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: httpd.access
volumes:
  elasticsearch:

networks:
  efk-net:

./fluentd/conf/fluent.conf:

# fluentd/conf/fluent.conf
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>
<match *.**>
  @type copy
  <store>
    @type elasticsearch
    host elasticsearch
    port 9200
    logstash_format true
    logstash_prefix fluentd
    logstash_dateformat %Y%m%d
    include_tag_key true
    type_name access_log
    tag_key @log_name
    flush_interval 1s
  </store>
  <store>
    @type stdout
  </store>
</match>
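
The compose file builds fluentd from ./fluentd, so that image needs the Elasticsearch output plugin installed. A minimal Dockerfile in the spirit of the linked tutorial (image tag is illustrative; pin a plugin version in real use):

# ./fluentd/Dockerfile
FROM fluent/fluentd:v1.16-debian-1
USER root
# provides the "@type elasticsearch" output used in fluent.conf
RUN ["gem", "install", "fluent-plugin-elasticsearch", "--no-document"]
USER fluent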

2. logstash test environment

Docs: https://www.elastic.co/guide/en/logstash/current/introduction.html

docker compose:

version: "3"

services:
  elasticsearch:
    image: elasticsearch:8.4.3
    container_name: elasticsearch
    restart: always
    environment:
      - cluster.name=elasticsearch
      - discovery.type=single-node
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - xpack.security.enabled=false
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 262144
        hard: 262144
    volumes:
      - elasticsearch:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
      - 9600:9600 # required for Performance Analyzer
    networks:
      - efk-net
  kibana:
    image: kibana:8.4.3
    container_name: kibana
    restart: always
    ports:
      - 5601:5601
    expose:
      - "5601"
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200
    networks:
      - efk-net
  logstash:
    image: logstash:8.5.0
    volumes:
      - ./logstash/:/usr/share/logstash/pipeline/
    links:
      - "elasticsearch"
    restart: always
    container_name: logstash
    ports:
      - "5000:5000"
    networks:
      - efk-net
  web:
    image: httpd
    container_name: web
    ports:
      - "80:80"
    links:
      - logstash
    networks:
      - efk-net
    logging:
      driver: "syslog"
      options:
        syslog-address: "tcp://192.168.0.112:5000"
volumes:
  elasticsearch:

networks:
  efk-net:

./logstash/logstash.conf:

input {
    tcp {
        port => 5000
        codec => json_lines
    }
}

output {
    elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "applog"
    }
}
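
With the stack up, the pipeline can be smoke-tested by sending a single JSON line over TCP (the payload is illustrative); the document should then show up in the applog index:

echo '{"message":"hello","level":"INFO"}' | nc localhost 5000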

3. Integrating logback in Java code

logstash

Docs: https://github.com/logfellow/logstash-logback-encoder

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="stash" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>127.0.0.1:4560</destination>

        <!-- encoder is required -->
        <encoder class="net.logstash.logback.encoder.LogstashEncoder" />
    </appender>

    <root level="DEBUG">
        <appender-ref ref="stash" />
    </root>
</configuration>
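
With this appender in place, ordinary SLF4J calls are shipped to logstash as JSON events, and MDC entries become top-level JSON fields. A minimal sketch (class and field names are made up for illustration):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.MDC;

public class JsonLogDemo {
    private static final Logger log = LoggerFactory.getLogger(JsonLogDemo.class);

    public static void main(String[] args) {
        // MDC entries are included as top-level fields by LogstashEncoder
        MDC.put("requestId", "req-123");
        log.info("user {} logged in", "alice");
        MDC.clear();
    }
}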

fluentd

Docs: https://github.com/sndyuk/logback-more-appenders

Example: logback-appenders-fluentd.xml

<appender name="FLUENT_SYNC"
            class="ch.qos.logback.more.appenders.DataFluentAppender">

    <!-- Tag for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
    <tag>debug</tag>
    <!-- [Optional] Label for Fluentd. Further information: http://docs.fluentd.org/articles/config-file -->
    <label>logback</label>

    <!-- Host name/address and port number where Fluentd is running -->
    <remoteHost>localhost</remoteHost>
    <port>24224</port>

    <!-- [Optional] Additional fields(Pairs of key: value) -->
    <!--
    <additionalField>
      <key>foo</key>
      <value>bar</value>
    </additionalField>
    <additionalField>
      <key>foo2</key>
      <value>bar2</value>
    </additionalField>
    -->
    <!-- [Optional] Ignored fields. These fields won't be emitted to Fluentd -->

    <ignoredField>throwable</ignoredField>
    <ignoredField>thread</ignoredField>

    <!-- [Optional] Configurations to customize Fluent-logger-java's behavior -->
    <bufferCapacity>16777216</bufferCapacity> <!-- in bytes -->
    <timeout>10000</timeout> <!-- in milliseconds -->

    <!--  [Optional] If true, Map Marker is expanded instead of nesting in the marker name -->
    <flattenMapMarker>false</flattenMapMarker>
    <!--  [Optional] default "marker" -->
    <markerPrefix></markerPrefix>

    <!-- [Optional] Message encoder if you want to customize the message -->
    <encoder>
      <pattern><![CDATA[%date{HH:mm:ss.SSS} [%thread] %-5level %logger{15}#%line %message]]></pattern>
    </encoder>

    <!-- [Optional] Message field key name. Default: "message" -->
    <messageFieldKeyName>msg</messageFieldKeyName>

  </appender>

  <appender name="FLUENT" class="ch.qos.logback.classic.AsyncAppender">
    <!-- Max queue size of logs waiting to be sent (when the queue is full, new logs are dropped). -->
    <queueSize>999</queueSize>
    <!-- Never block when the queue becomes full. -->
    <neverBlock>true</neverBlock>
    <!-- The maximum queue flush time allowed during appender stop.
         If the worker takes longer than this, it exits, discarding any remaining items in the queue.
         Default: 1000 millis.
     -->
    <maxFlushTime>1000</maxFlushTime>
    <appender-ref ref="FLUENT_SYNC" />
  </appender>
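
DataFluentAppender requires logback-more-appenders plus the fluent-logger library on the classpath. A Maven sketch (version numbers are illustrative; check the project pages for current releases):

<dependency>
    <groupId>com.sndyuk</groupId>
    <artifactId>logback-more-appenders</artifactId>
    <version>1.8.8</version>
</dependency>
<!-- fluent-logger is the transport DataFluentAppender uses to reach fluentd -->
<dependency>
    <groupId>org.fluentd</groupId>
    <artifactId>fluent-logger</artifactId>
    <version>0.3.4</version>
</dependency>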

4. Timestamps

  • fluentd timestamps only go down to the second; without milliseconds, log ordering gets scrambled.
  • logstash goes down to the millisecond, which is much better than fluentd, but when a burst of logs lands in the same millisecond (unreasonable in itself), ordering can still get scrambled.

4.1 Workaround

logstash can be configured to use nanosecond timestamps:
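
A plausible sketch, assuming logback 1.3+ on Java 9+ (where log events carry an Instant with sub-millisecond precision) and the timestampPattern option of logstash-logback-encoder; the pattern below is a standard DateTimeFormatter pattern, not verified against this exact setup:

<encoder class="net.logstash.logback.encoder.LogstashEncoder">
    <!-- nine S's keep the full nanosecond fraction of the event timestamp -->
    <timestampPattern>yyyy-MM-dd'T'HH:mm:ss.SSSSSSSSSXXX</timestampPattern>
</encoder>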

5. Container log configuration

Configuration for redirecting container logs to fluentd or logstash.

5.1 docker log-driver configuration
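
For a single container, the fluentd log driver can be set on the docker command line; for example:

docker run --log-driver=fluentd \
    --log-opt fluentd-address=localhost:24224 \
    --log-opt tag=httpd.access \
    httpd

To make it the daemon-wide default, set it in /etc/docker/daemon.json and restart the docker daemon:

{
    "log-driver": "fluentd",
    "log-opts": {
        "fluentd-address": "localhost:24224"
    }
}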

5.2 docker compose logging
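
In docker compose, the same options go under the service's logging key, exactly as in the compose files above:

services:
  web:
    image: httpd
    logging:
      driver: "fluentd"
      options:
        fluentd-address: localhost:24224
        tag: httpd.access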

5.3 Using environment variables in logstash config
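
logstash pipeline config supports ${VAR} and ${VAR:default} substitution from environment variables, so ports, hosts, and index names can be injected via compose's environment key. A sketch of the earlier pipeline parameterized this way (the variable names are made up for illustration):

input {
    tcp {
        # falls back to 5000 when TCP_PORT is unset
        port => "${TCP_PORT:5000}"
        codec => json_lines
    }
}

output {
    elasticsearch {
        hosts => ["${ES_HOST:elasticsearch:9200}"]
        index => "${APP_INDEX:applog}"
    }
}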

6. Integrating the SkyWalking trace ID (TID)

Docs: https://skywalking.apache.org/docs/skywalking-java/next/en/setup/service-agent/java-agent/application-toolkit-logback-1.x/

<appender name="stash_sync" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
  <destination>IP:port</destination>

  <!-- encoder is required -->
  <encoder class="net.logstash.logback.encoder.LogstashEncoder" >
    <provider class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.logstash.TraceIdJsonProvider"/>
  </encoder>
</appender>
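
TraceIdJsonProvider comes from SkyWalking's logback toolkit, which has to be on the classpath; a Maven sketch (the version property is a placeholder):

<dependency>
    <groupId>org.apache.skywalking</groupId>
    <artifactId>apm-toolkit-logback-1.x</artifactId>
    <version>${skywalking.version}</version>
</dependency>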

7. Notes

7.1 Setting custom field types via an index template

Docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html

Kibana walks you through this step by step; you can also hit ES directly:

PUT _index_template/template_1
{
  "index_patterns": ["te*", "bar*"],
  "template": {
    "settings": {
      "number_of_shards": 1
    },
    "mappings": {
      "_source": {
        "enabled": true
      },
      "properties": {
        "host_name": {
          "type": "keyword"
        },
        "created_at": {
          "type": "date",
          "format": "EEE MMM dd HH:mm:ss Z yyyy"
        }
      }
    },
    "aliases": {
      "mydata": { }
    }
  },
  "priority": 500,
  "composed_of": ["component_template1", "runtime_component_template"],
  "version": 3,
  "_meta": {
    "description": "my custom"
  }
}

7.2 If you don't want custom MDC fields to default to the text type, add an explicit mapping to the index in advance:

Docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/explicit-mapping.html

PUT /my-index-000001/_mapping
{
  "properties": {
    "employee-id": {
      "type": "keyword",
      "index": false
    }
  }
}
