Use tee (or equivalent) but limit max file size or rotate to new file

抹茶落季 2020-12-05 02:24

I would like to capture the output of a UNIX process but limit the maximum file size and/or rotate to a new file.

I have seen logrotate, but it does not work in real time. As I

7 Answers
  • 2020-12-05 03:23

    The most straightforward way to solve this is probably to use Python and the logging module, which was designed for this purpose. Create a script that reads from stdin and writes to stdout, and implement the log rotation described below; a minimal sketch of such a script is included at the end of this answer.

    The "logging" module provides the

    class logging.handlers.RotatingFileHandler(filename, mode='a', maxBytes=0,
                  backupCount=0, encoding=None, delay=0)
    

    which does exactly what you are asking about.

    You can use the maxBytes and backupCount values to allow the file to rollover at a predetermined size.

    From docs.python.org

    Sometimes you want to let a log file grow to a certain size, then open a new file and log to that. You may want to keep a certain number of these files, and when that many files have been created, rotate the files so that the number of files and the size of the files both remain bounded. For this usage pattern, the logging package provides a RotatingFileHandler:

    import glob
    import logging
    import logging.handlers
    
    LOG_FILENAME = 'logging_rotatingfile_example.out'
    
    # Set up a specific logger with our desired output level
    my_logger = logging.getLogger('MyLogger')
    my_logger.setLevel(logging.DEBUG)
    
    # Add the log message handler to the logger
    handler = logging.handlers.RotatingFileHandler(
                  LOG_FILENAME, maxBytes=20, backupCount=5)
    
    my_logger.addHandler(handler)
    
    # Log some messages
    for i in range(20):
        my_logger.debug('i = %d' % i)
    
    # See what files are created
    logfiles = glob.glob('%s*' % LOG_FILENAME)
    
    for filename in logfiles:
        print(filename)
    

    The result should be 6 separate files, each with part of the log history for the application:

    logging_rotatingfile_example.out
    logging_rotatingfile_example.out.1
    logging_rotatingfile_example.out.2
    logging_rotatingfile_example.out.3
    logging_rotatingfile_example.out.4
    logging_rotatingfile_example.out.5
    

    The most current file is always logging_rotatingfile_example.out, and each time it reaches the size limit it is renamed with the suffix .1. Each of the existing backup files is renamed to increment its suffix (.1 becomes .2, etc.) and the oldest backup (.5 in this example) is erased, so the number of files never exceeds backupCount + 1.

    Obviously this example sets the log length much too small as an extreme example; you would want to set maxBytes to an appropriate value.
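
    To tie this back to the tee-style use case in the question, here is a minimal sketch (not part of the original answer) of a filter that copies stdin to stdout while also appending it to a rotating, size-limited log file via RotatingFileHandler. The script name rotating_tee.py, the roughly 1 MB limit, and the backup count of 5 are illustrative assumptions:

    #!/usr/bin/env python3
    """Tee-like filter: copy stdin to stdout and to a rotating log file."""
    import logging
    import logging.handlers
    import sys

    # Log file name is taken from the command line, defaulting to capture.log.
    LOG_FILENAME = sys.argv[1] if len(sys.argv) > 1 else 'capture.log'

    logger = logging.getLogger('rotating_tee')
    logger.setLevel(logging.INFO)

    # Rotate at roughly 1 MB, keeping capture.log.1 .. capture.log.5 as backups.
    handler = logging.handlers.RotatingFileHandler(
                  LOG_FILENAME, maxBytes=1_000_000, backupCount=5)
    logger.addHandler(handler)

    for line in sys.stdin:
        sys.stdout.write(line)            # pass the data through, like tee
        logger.info(line.rstrip('\n'))    # and append it to the rotating file

    You could then pipe a process into it much the way you would use tee, for example some_command | python3 rotating_tee.py app.log (both names here are placeholders).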
