How to use Tornado's log file output


Reference answer A: 1. First, look at the log4j configuration file:

log4j.rootLogger=INFO,R,Client
log4j.appender.R=org.apache.log4j.RollingFileAppender
log4j.appender.R.File=./log/server.log
log4j.appender.R.MaxFileSize=5MB
log4j.appender.R.MaxBackupIndex=10
log4j.append
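For comparison, the Python logging equivalent of that rolling-file setup is a RotatingFileHandler; a minimal sketch (the logger name and format below are illustrative, not from the original answer):

import logging
from logging.handlers import RotatingFileHandler

# roughly mirrors the log4j config above: 5 MB per file, keep 10 backups
logger = logging.getLogger("server")
handler = RotatingFileHandler("./log/server.log", maxBytes=5 * 1024 * 1024, backupCount=10)
handler.setFormatter(logging.Formatter("%(asctime)s [%(levelname)s] %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)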

How to output logs to the same file

I build my web service with Tornado and use logging to get Tornado's logger, and everything looked fine. But because the service is multiprocessing, when I checked the logs today I found that some messages were missing. So I want to ask: if I open a different log file for each process, will that solve the problem?

Or is there any other solution for writing logs from a multi-process server?

Answer

This is the code I use in my solution:

https://gist.github.com/hcl14/259432dd648180bf2af672c26d9df9fc

It runs Tornado servers hosting a Flask application, and it logs to the screen, to a common log (stdout.log), and to process-specific logs. See the comments in the code for how to import and organize everything.

To run everything, put all three files in the same directory, create a logs subfolder, run mainprogram.py, and test it by sending different POST requests. You can then check the files in the logs folder to make sure everything was logged correctly.
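With the default environment values, that amounts to:

$ mkdir logs
$ python mainprogram.py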

mainprogram.py is the main file, where the logger is initialized and stored in the global_vars module. That module must be imported everywhere, and the process-specific logger is derived from it. The reason is that this is simply a convenient way to store variables in a scope that is global across modules. When a new process forks, it overwrites the logger with a process-specific one, so every module the process uses writes to the corresponding logger:

separate_logging.py (rename it to mainprogram.py for convenience):

import logging
import os
import multiprocessing

from tornado.wsgi import WSGIContainer
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.options import define, options

# a way to pass variables into separate modules, 
# process-specific because of 'fork' mode
# (each process will have its own version of this module)
import global_vars



logPath = os.environ.get('LOGFOLDER','logs')
fileName = os.environ.get('LOGFILE', "stdout.log")

address = os.environ.get('ADDRESS', '0.0.0.0')
port = int(os.environ.get('PORT', '8888'))  # bind expects a numeric port

NUM_PROCESSES = os.cpu_count()

# initializes the main logger to be used across all modules

logFormatter = logging.Formatter("%(asctime)s [%(processName)-12.12s] [%(threadName)-12.12s] [%(levelname)-5.5s] [%(filename)s:%(lineno)d] %(message)s")
rootLogger = logging.getLogger(__name__)

# first handler is general log
fileHandler = logging.FileHandler("{0}/{1}".format(logPath, fileName))
fileHandler.setFormatter(logFormatter)
rootLogger.addHandler(fileHandler)
# second handler is logging to console
consoleHandler = logging.StreamHandler()
consoleHandler.setFormatter(logFormatter)
rootLogger.addHandler(consoleHandler) 

rootLogger.setLevel("DEBUG") # log everything
rootLogger.propagate = False

# until process branches, it uses rootLogger
global_vars.multiprocess_globals["logger"] = rootLogger



# third handler is process-specific log, initialized in processes
def init_logger2(secondary_logfile, rootLogger):
    fileHandler1 = logging.FileHandler("{0}/{1}".format(logPath, 'process_'+str(secondary_logfile)+'.log'))
    fileHandler1.setFormatter(logFormatter)
    rootLogger.addHandler(fileHandler1)

    return rootLogger



# external modules import goes here!
# otherwise they will not find any logger!
from flask_app import create_app




# ---------------

# process function
def run(process_id):

    # initialize process-specific logger
    processLogger = init_logger2(process_id, rootLogger)
    global_vars.multiprocess_globals["logger"] = processLogger

    # here you can run tornado app:

    try:
        app = create_app()  # pass interests to flask app        
        ioloop = IOLoop()
        http_server_api = HTTPServer(WSGIContainer(app))

        # reuse_port allows multiple servers co-exist
        # as separate processes
        http_server_api.bind(address=address, port=port, reuse_port=True) 
        http_server_api.start()

        processLogger.info("Process %s started %s:%s" % (process_id, address,
                                            port))

        ioloop.start()
    except Exception as e:
        processLogger.error(e)





# start processes (tornado servers)
if __name__ == '__main__':

    processes = []
    for i in range(1,NUM_PROCESSES):
        p = multiprocessing.Process(target=run, args=(str(i),))
        p.daemon = False # if we want to spawn child processes
        #p.daemon = True # if we want to gracefully stop program
        processes.append(p)


    # Run processes:

    for p in processes:
        p.start()

    # block program from exiting
    for p in processes:
        p.join() 

global_vars.py

# global variables to be used across processes.
# Separate file is needed to make globals accessible from different submodules.

# Global variables which need to exist in the scope of each process.
# Variables are added into this dictionary during process initialization.
multiprocess_globals = {}

flask_app.py: just a Flask application that uses deeper modules. Those modules should then import the logger the same way flask_app does:

# create flask app to be run by tornado process

# process-specific globals
import global_vars 

from flask import Flask, request, Response, json, abort, jsonify
import json as json2

# from deeper_module import do_something

app = Flask(__name__)
app.config.from_object(__name__)

# get process-specific logger
# do such import in any submodule !
# as global_vars is changed on process fork!
rootLogger = global_vars.multiprocess_globals["logger"]
logger = rootLogger.getChild(__name__)


def create_app():

    app = Flask(__name__)

    @app.route('/my_url1', methods=['POST'])
    def my_url1():

        body = json.loads(request.data)

        # debug to process-specific logger
        logger.debug(body)

        # do_something()

        response = app.response_class(
            response=json.dumps({'response':'good'}),
            status=200,
            mimetype='application/json'
        )
        return response

    # another functions

    return app
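For reference, a hypothetical deeper_module.py behind the commented-out import above would follow the same pattern (the module name and do_something are assumptions taken from that comment):

# deeper_module.py - hypothetical submodule following the same logger pattern

# process-specific globals; import them here too,
# because global_vars is updated on process fork
import global_vars

rootLogger = global_vars.multiprocess_globals["logger"]
logger = rootLogger.getChild(__name__)

def do_something():
    # writes to the same process-specific log as flask_app
    logger.debug("do_something() was called")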

All modules that use the specific logger must be imported after global_vars is initialized in the main program. When you start the program, you will see:

2018-10-16 12:50:20,988 [Process-1   ] [MainThread  ] [INFO ] [separate_logging.py:86] Process 1 started 0.0.0.0:8888
2018-10-16 12:50:20,989 [Process-2   ] [MainThread  ] [INFO ] [separate_logging.py:86] Process 2 started 0.0.0.0:8888
2018-10-16 12:50:20,990 [Process-3   ] [MainThread  ] [INFO ] [separate_logging.py:86] Process 3 started 0.0.0.0:8888
2018-10-16 12:50:20,991 [Process-4   ] [MainThread  ] [INFO ] [separate_logging.py:86] Process 4 started 0.0.0.0:8888
2018-10-16 12:50:20,991 [Process-6   ] [MainThread  ] [INFO ] [separate_logging.py:86] Process 6 started 0.0.0.0:8888
2018-10-16 12:50:20,992 [Process-7   ] [MainThread  ] [INFO ] [separate_logging.py:86] Process 7 started 0.0.0.0:8888
2018-10-16 12:50:20,993 [Process-5   ] [MainThread  ] [INFO ] [separate_logging.py:86] Process 5 started 0.0.0.0:8888

You can then fire a POST request, which will be handled by one of the started processes:

$ curl -H "Content-Type: application/json" -X POST -d '{"bla-bla":"bla"}' 127.0.0.1:8888/my_url1

The result is:

2018-10-16 12:51:40,040 [Process-5 ] [MainThread ] [DEBUG] [flask_app.py:31] {'bla-bla': 'bla'}

You can check that this line also appears in the common log logs/stdout.log and in logs/process_5.log.
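As an alternative to per-process log files, the standard library's own approach to multi-process logging is a QueueHandler in each child feeding a single QueueListener that owns the shared file; a minimal sketch, independent of the Tornado setup above (the logger name, format, and file path are assumptions):

import logging
import logging.handlers
import multiprocessing

def worker(log_queue):
    # each child process only enqueues records; it never touches the file
    logger = logging.getLogger("app")
    logger.setLevel(logging.DEBUG)
    logger.addHandler(logging.handlers.QueueHandler(log_queue))
    logger.info("hello from %s", multiprocessing.current_process().name)

if __name__ == '__main__':
    log_queue = multiprocessing.Queue()
    # a single listener in the parent owns the file, so lines never interleave
    fileHandler = logging.FileHandler("logs/stdout.log")
    fileHandler.setFormatter(logging.Formatter("%(asctime)s [%(processName)s] %(message)s"))
    listener = logging.handlers.QueueListener(log_queue, fileHandler)
    listener.start()

    processes = [multiprocessing.Process(target=worker, args=(log_queue,)) for _ in range(4)]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
    listener.stop()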
