How to handle disk full errors while logging in logback?

眼角桃花 · 2021-02-08 12:36

I am using slf4j + logback for logging in our application. We were previously using JCL + log4j and migrated recently.

Due to the high amount of logging in our application, there

3 Answers
  • 2021-02-08 13:18

    Two real options:

    • Add a cron job on Linux (or a scheduled task on Windows) to clean up old logs, gzipping some of them if need be.
    • Buy a larger hard disk and perform the maintenance manually.
    • Optionally, reduce the amount of logging.

    A full disk is like an OutOfMemoryError: by the time you catch it, you can't know what failed first. The way to deal with running out of memory (or disk) is to prevent it. There are many situations where extra disk space may be needed, and the task will simply fail.
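    The cron-cleanup option above can be sketched as a small shell script. The log directory, retention periods, and file patterns below are illustrative assumptions; `LOG_DIR` defaults to a throwaway temp directory here so the sketch is safe to run as-is — point it at your real log directory and schedule it daily from cron.

    ```shell
    #!/bin/sh
    # Sketch of a daily log-cleanup job (run from cron).
    # LOG_DIR is a placeholder; set it to your application's log directory.
    LOG_DIR="${LOG_DIR:-$(mktemp -d)}"

    # Compress plain .log files older than 1 day
    find "$LOG_DIR" -name '*.log' -mtime +1 -exec gzip -f {} \;

    # Delete compressed logs older than 14 days
    find "$LOG_DIR" -name '*.log.gz' -mtime +14 -delete
    ```

    A matching crontab entry would be something like `0 3 * * * LOG_DIR=/var/log/myapp /usr/local/bin/cleanup-logs.sh` (path and schedule are, again, only examples).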

  • 2021-02-08 13:24

    You could try wrapping the slf4j Logger (org.slf4j.Logger is an interface, so delegate rather than extend), specifically the info, debug, trace, and other methods, and manually query the available space (via File.getUsableSpace()) before every call.

    That way you will not need any additional application dependency.
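    A minimal sketch of that idea, under stated assumptions: the class name DiskGuardedLogger and the 64 MB threshold are hypothetical, and the delegation to the real slf4j logger is replaced here by a System.out placeholder so the sketch compiles with no dependencies.

    ```java
    import java.io.File;

    // Sketch: check usable disk space before each log call.
    // Names and threshold are illustrative, not a real slf4j API.
    public class DiskGuardedLogger {
        private static final long MIN_FREE_BYTES = 64L * 1024 * 1024; // 64 MB

        private final File logDir;

        public DiskGuardedLogger(File logDir) {
            this.logDir = logDir;
        }

        // True when there is enough usable space on the log partition.
        public boolean hasHeadroom() {
            return logDir.getUsableSpace() > MIN_FREE_BYTES;
        }

        public void info(String msg) {
            if (hasHeadroom()) {
                // In a real wrapper, delegate to the underlying org.slf4j.Logger here.
                System.out.println("INFO " + msg);
            }
        }

        public static void main(String[] args) {
            DiskGuardedLogger log = new DiskGuardedLogger(new File("."));
            log.info("headroom available: " + log.hasHeadroom());
        }
    }
    ```

    Note that the check-before-every-call approach adds a filesystem stat per log statement, which can be costly on hot paths; caching the result for a few seconds would be a reasonable refinement.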

  • 2021-02-08 13:29

    You do not have to do or configure anything. Logback is designed to handle this situation quite nicely. Once the target disk is full, logback's FileAppender will stop writing to it for a short amount of time. Once that delay elapses, it will attempt to recover. If the recovery attempt fails, the waiting period is increased gradually, up to a maximum of 1 hour. If the recovery attempt succeeds, FileAppender will start logging again.

    The process is entirely automatic and extends seamlessly to RollingFileAppender. See also graceful recovery.
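    As the answer says, no configuration is needed for the recovery itself. For context, a typical RollingFileAppender setup looks like the fragment below; the file names, pattern, and 14-day maxHistory are illustrative choices, and capping history this way also helps avoid filling the disk in the first place.

    ```xml
    <configuration>
      <appender name="FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
          <!-- Roll daily, compress, and keep at most 14 days of history -->
          <fileNamePattern>app.%d{yyyy-MM-dd}.log.gz</fileNamePattern>
          <maxHistory>14</maxHistory>
        </rollingPolicy>
        <encoder>
          <pattern>%d %-5level %logger{36} - %msg%n</pattern>
        </encoder>
      </appender>
      <root level="INFO">
        <appender-ref ref="FILE"/>
      </root>
    </configuration>
    ```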

    On a more personal note, graceful recovery is one of my favorite logback features.
