Getting logs twice in AWS Lambda function

梦谈多话
2021-02-08 10:15

I'm attempting to create a centralized module to set up my log formatter to be shared across a number of Python modules within my Lambda function. This function will ultimately…

2 Answers
  •  独厮守ぢ
    2021-02-08 10:43

    I'm not sure whether this is the cause of your problem, but by default, Python loggers propagate their messages up the logging hierarchy. As you probably know, Python loggers are organized in a tree, with the root logger at the top and the other loggers below it. In logger names, a dot (.) introduces a new hierarchy level. So if you do

    import logging

    logger = logging.getLogger('some_module.some_function')
    

    then you actually have 3 loggers:

    The root logger (`logging.getLogger()`)
        A logger at module level (`logging.getLogger('some_module')`)
            A logger at function level (`logging.getLogger('some_module.some_function')`)
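
    For example, here is a minimal check of those parent links, using the same hypothetical names (this only illustrates the hierarchy, it is not part of the fix):

    import logging

    mod = logging.getLogger('some_module')
    func = logging.getLogger('some_module.some_function')
    root = logging.getLogger()

    # Each logger's parent is the nearest existing ancestor in the dotted name.
    print(func.parent is mod)    # True
    print(mod.parent is root)    # True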
    

    If you emit a log message on a logger and it is not discarded based on the logger's minimum level, then the message is passed on to the logger's handlers and to its parent logger. See the logging flow chart in the Python logging documentation for more information.

    If that parent logger (or any logger higher up in the hierarchy) also has handlers, then they are called, too.
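
    As a rough standalone illustration of how this produces duplicates (made-up names, not your actual Lambda setup): if both a module logger and the root logger have a handler, one log call is emitted by each of them.

    import logging

    # A handler on the root logger (basicConfig attaches a StreamHandler to it).
    logging.basicConfig(level=logging.INFO, format='ROOT  %(message)s')

    # A second handler on a child logger.
    logger = logging.getLogger('some_module')
    child_handler = logging.StreamHandler()
    child_handler.setFormatter(logging.Formatter('CHILD %(message)s'))
    logger.addHandler(child_handler)

    # One call, two lines of output: the child's handler emits the record,
    # then propagation hands it to the root logger's handler as well.
    logger.info('hello')
    # CHILD hello
    # ROOT  hello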

    I suspect that in your case, either the root logger or your main logger somehow ends up with some handlers attached, which leads to the duplicate messages. To avoid that, you can either set `propagate` on your logger to `False` or attach your handlers only to the root logger, as sketched below.
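
    A rough sketch of both options, reusing the hypothetical 'some_module' name (your central setup module and logger names will differ):

    import logging

    # Option 1: if a logger has its own handler, stop its records from
    # also reaching the root logger's handlers:
    #     logging.getLogger('some_module').propagate = False

    # Option 2: attach the formatter/handler to the root logger only and
    # let module-level loggers propagate to it.
    root = logging.getLogger()
    for old_handler in list(root.handlers):  # drop any pre-attached handlers,
        root.removeHandler(old_handler)      # e.g. one the Lambda runtime typically installs

    handler = logging.StreamHandler()
    handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s'))
    root.addHandler(handler)
    root.setLevel(logging.INFO)

    # Module code logs through its own logger; each record is handled
    # exactly once, by the root logger's handler.
    logging.getLogger('some_module').info('logged exactly once')

    With a shared setup module, attaching handlers only to the root logger is usually the simpler route, because every `logging.getLogger(__name__)` logger then inherits the same formatting through propagation.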
