I'm attempting to create a centralized module to set up my log formatter to be shared across a number of Python modules within my Lambda function. This function will ultimately…
I'm not sure whether this is the cause of your problem, but by default, Python's loggers propagate their messages up the logging hierarchy. As you probably know, Python loggers are organized in a tree, with the root logger at the top and other loggers below it. In logger names, a dot (`.`) introduces a new hierarchy level. So if you do
logger = logging.getLogger('some_module.some_function')
then you actually have 3 loggers:
- The root logger (`logging.getLogger()`)
- A logger at module level (`logging.getLogger('some_module')`)
- A logger at function level (`logging.getLogger('some_module.some_function')`)
If you emit a log message on a logger and it is not discarded based on the logger's minimum level, then the message is passed to the logger's handlers and on to its parent logger. See the logging flow chart in the Python documentation for more information.
If that parent logger (or any logger higher up in the hierarchy) also has handlers, then they are called, too.
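As a minimal, self-contained illustration of that propagation (the logger name here is a placeholder, not taken from your code), attaching handlers at two levels of the hierarchy makes each message come out twice:

import logging

# A handler on the root logger
root = logging.getLogger()
root.addHandler(logging.StreamHandler())
root.setLevel(logging.INFO)

# A second handler on a child logger
child = logging.getLogger('some_module')
child.addHandler(logging.StreamHandler())

# The record is emitted by the child's handler, then propagates up
# to the root logger and is emitted by the root's handler as well.
child.info('hello')  # printed twice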
I suspect that in your case, either the root logger or the `main` logger somehow ends up with some handlers attached, which leads to the duplicate messages. To avoid that, you can set `propagate` on your logger to `False`, or only attach your handlers to the root logger.
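A minimal sketch of both options (the logger name is a placeholder, and the format string mirrors the one visible in your output, so treat it as an assumption):

import logging

# Option 1: keep your own handler, but stop propagation so the
# root handler never sees the record a second time
logger = logging.getLogger('some_module')
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.INFO)
logger.propagate = False

# Option 2: configure only the root logger and let child loggers
# inherit it (in a plain Python process; in Lambda the root logger
# already has a handler, so basicConfig would be a no-op there)
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s [%(levelname)s]-%(module)s:%(lineno)d %(message)s',
)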
AWS Lambda also sets up a handler on the root logger, and anything written to stdout is captured and logged at level `INFO`. Your log message is thus captured twice:

- once by the AWS Lambda handler already attached to the root logger, and
- once more because your own handler writes the formatted message to stdout, which the Lambda infrastructure captures and logs again as an `INFO` message.
This is why the messages all start with `(asctime) [(levelname)]-(module):(lineno)` information; the root logger is configured to output messages with that format, and the information written to stdout is just another `%(message)s` part in that output.
Just don't set a handler when you are in the AWS environment, or disable propagation of the output to the root handler and live with all your messages being recorded as `INFO` messages by AWS; in the latter case your own formatter could include the `levelname` level information in the output.
You can disable log propagation with `logger.propagate = False`, at which point your message is only going to be passed to your handler, not to the root handler as well.
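Applied to the centralized-module goal from your question, that could look something like the following; the module name `log_config`, the function name, and the format string are illustrative placeholders, not a prescribed layout:

# log_config.py - hypothetical shared logging setup
import logging
import sys

def get_logger(name):
    logger = logging.getLogger(name)
    if not logger.handlers:  # don't stack handlers on repeated imports
        handler = logging.StreamHandler(sys.stdout)
        handler.setFormatter(logging.Formatter(
            '%(asctime)s [%(levelname)s]-%(module)s:%(lineno)d %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
        logger.propagate = False  # keep the root (Lambda) handler out of it
    return logger

Each module in the function can then call `log_config.get_logger(__name__)`; with propagation off, only the shared handler formats the output, and Lambda records the captured stdout as `INFO`.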
Another option is to just rely on the AWS root logger configuration. According to this excellent reverse-engineering blog post, the root logger is configured with:
import logging
import time

# LambdaLoggerHandler and LambdaLoggerFilter are internal classes
# defined by the AWS Lambda Python runtime bootstrap
logging.Formatter.converter = time.gmtime
logger = logging.getLogger()
logger_handler = LambdaLoggerHandler()
logger_handler.setFormatter(logging.Formatter(
    '[%(levelname)s]\t%(asctime)s.%(msecs)dZ\t%(aws_request_id)s\t%(message)s\n',
    '%Y-%m-%dT%H:%M:%S'
))
logger_handler.addFilter(LambdaLoggerFilter())
logger.addHandler(logger_handler)
This replaces the `time.localtime` converter on `logging.Formatter` with `time.gmtime` (so timestamps use UTC rather than local time), sets a custom handler that makes sure messages go to the Lambda infrastructure, configures a format, and adds a filter object that only adds an `aws_request_id` attribute to records (so the above formatter can include it) but doesn't actually filter anything.
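For illustration, that filter pattern (attach a value to every record without rejecting any) can be sketched like this; `RequestIdFilter` is a hypothetical stand-in, not AWS's actual class:

import logging

class RequestIdFilter(logging.Filter):
    """Attach an aws_request_id attribute to every record; reject none."""
    def __init__(self, request_id):
        super().__init__()
        self.request_id = request_id

    def filter(self, record):
        record.aws_request_id = self.request_id
        return True  # returning True means nothing is filtered out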
You could alter the formatter on that handler by updating the attributes on the `handler.formatter` object:
for handler in logging.getLogger().handlers:
    formatter = handler.formatter
    if formatter is not None and 'aws_request_id' in formatter._fmt:
        # this is the AWS Lambda formatter
        # formatter.datefmt => '%Y-%m-%dT%H:%M:%S'
        # formatter._style._fmt =>
        #     '[%(levelname)s]\t%(asctime)s.%(msecs)dZ'
        #     '\t%(aws_request_id)s\t%(message)s\n'
        # e.g. add the module and line number (an example format, not
        # a recommendation)
        formatter._style._fmt = (
            '[%(levelname)s]\t%(asctime)s.%(msecs)dZ'
            '\t%(aws_request_id)s\t%(module)s:%(lineno)d\t%(message)s\n'
        )
and then just drop your own log handler entirely. You do want to be careful with this; the AWS Lambda infrastructure could well be counting on a specific format being used. The output you show in your question doesn't include the date component (the `%Y-%m-%dT` part of the `formatter.datefmt` string), which probably means that the format has been parsed out and is being presented to you in a web application view of the data.