What's more efficient - storing logs in an SQL database or in files?

暗喜 2020-12-15 03:30

I have a few scripts run by cron quite often. Right now I don't store any logs, so if any script fails to run, I won't know it until I see the results - and even when I notice…

9 Answers
  • 2020-12-15 04:09

    It depends on the size of the logs and on the concurrency level. Because of the latter, your test is completely invalid - if there are 100 users on the site and, let's say, 10 threads writing to the same file, fwrite won't be so fast anymore. One of the things an RDBMS provides is concurrency control.

    It also depends on the requirements and on what kind of analysis you want to perform. Just reading records back is easy, but what about aggregating data over a defined period?

    Large-scale web sites use systems like Scribe for writing their logs.

    If you are talking about 5 records per minute, however, this is a really low load, so the main question is how you are going to read them. If a file is suitable for your needs, go with the file. Generally, append-only writes (usual for logs) are really fast; a minimal sketch of doing this safely under concurrency follows.
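    One way to reconcile "fast append" with the concurrency concern above is an exclusive lock around each write. This is a sketch under assumptions - the function name and log path are hypothetical, not from the answer:

    <?php
    // Append one timestamped line to a shared log file. flock(LOCK_EX)
    // blocks until this process holds the lock, so concurrent cron
    // scripts cannot interleave their writes. The path is hypothetical.
    function appendLogLine($line, $file = '/var/log/myapp/cron.log')
    {
        $fp = fopen($file, 'ab'); // 'a' = append-only
        if ($fp === false) {
            return false;
        }
        flock($fp, LOCK_EX);
        fwrite($fp, date('c') . ' ' . $line . "\n");
        flock($fp, LOCK_UN);
        fclose($fp);
        return true;
    }
    ?>

    For a one-off write, file_put_contents($file, $line, FILE_APPEND | LOCK_EX) does the same thing in a single call.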

  • 2020-12-15 04:09

    Personally, I prefer log files so I've created two functions:

    <?php
    // Append a timestamped message to the given log file.
    function logMessage($message = null, $filename = null)
    {
        if (!is_null($filename))
        {
            $logMsg = date('Y/m/d H:i:s') . ": $message\n";
            // Message type 3 tells error_log() to append to $filename.
            error_log($logMsg, 3, $filename);
        }
    }

    // Same as logMessage(), but tagged so errors are easy to grep for.
    function logError($message = null, $filename = null)
    {
        if (!is_null($message))
        {
            logMessage("***ERROR*** {$message}", $filename);
        }
    }
    ?>
    
    

    I define a constant or two (I use ACTIVITY_LOG and ERROR_LOG, both set to the same file, so you don't need to read two files side by side to get an overall view of a run) and call them as appropriate. I've also created a dedicated folder (/var/log/phplogs), and each application that I write has its own log file. Finally, I rotate the logs so that I have some history to refer back to for customers.

    Liberal use of the above functions means that I can trace the execution of apps fairly easily.
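
    A minimal usage sketch of those functions - the constant values and the work being logged are illustrative assumptions, not from the answer:

    <?php
    // Hypothetical paths; the answer keeps both constants on one file.
    define('ACTIVITY_LOG', '/var/log/phplogs/myapp.log');
    define('ERROR_LOG', ACTIVITY_LOG);

    logMessage('cron run started', ACTIVITY_LOG);

    $data = file_get_contents('https://example.com/feed'); // placeholder work
    if ($data === false) {
        logError('failed to fetch feed', ERROR_LOG);
    } else {
        logMessage('feed fetched: ' . strlen($data) . ' bytes', ACTIVITY_LOG);
    }
    ?>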

  • 2020-12-15 04:13

    Commenting on your findings:

    Regarding writing to the file, you are probably right.
    Regarding reading, you are dead wrong.

    Writing to a database:

    1. MyISAM locks the whole table on inserts, causing lock contention. Use InnoDB, which has row-level locking (see the sketch after this list).
    2. Contrary to 1: if you want to do full-text searches on the log, use MyISAM, which supports full-text indexes.
    3. If you want to be really fast, you can use the MEMORY engine, which keeps the table in RAM. Transfer the data to a disk-based table when CPU load is low.
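
    As a sketch of that engine choice, creating the log table from PHP might look like this. The schema, DSN, and credentials are assumptions inferred from the queries below, not something the answer specifies:

    <?php
    // Hypothetical connection details.
    $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'secret');

    // InnoDB gives row-level locking on inserts (point 1 above); the
    // index supports the WHERE clauses in the queries that follow.
    $pdo->exec("
        CREATE TABLE IF NOT EXISTS log (
            id       INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
            logdate  DATETIME     NOT NULL,
            user_id  INT UNSIGNED NOT NULL,
            username VARCHAR(64)  NOT NULL,
            action   VARCHAR(255) NOT NULL,
            error    INT          NOT NULL DEFAULT 0,
            KEY idx_user_error (user_id, error)
        ) ENGINE=InnoDB
    ");
    ?>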

    Reading from the database

    This is where the database truly shines.
    You can combine all sorts of information from different entries much faster and more easily than you ever could with a flat file.

    SELECT logdate, username, action FROM log WHERE user_id = '1' /*root*/ AND error = 10;
    

    If you have indexes on the fields used in the WHERE clause, the result will return almost instantly; try doing that with a flat file.

    SELECT username, COUNT(*) AS error_count
    FROM log
    WHERE error <> 0
    GROUP BY username WITH ROLLUP;
    

    Never mind the fact that the table is not normalized; this would be much, much slower and harder to do with a flat file.
    It's a no-brainer, really.
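
    Running those queries from PHP is short work with PDO; this sketch reuses the hypothetical $pdo connection and schema from above:

    <?php
    // Per-user lookup with a prepared statement (parameters stay escaped).
    $stmt = $pdo->prepare(
        'SELECT logdate, username, action FROM log WHERE user_id = ? AND error = ?'
    );
    $stmt->execute([1, 10]);
    foreach ($stmt as $row) {
        echo "{$row['logdate']} {$row['username']} {$row['action']}\n";
    }

    // Errors per user; WITH ROLLUP adds a final row with a NULL username
    // holding the grand total.
    $sql = 'SELECT username, COUNT(*) AS error_count
            FROM log WHERE error <> 0
            GROUP BY username WITH ROLLUP';
    foreach ($pdo->query($sql) as $row) {
        echo ($row['username'] ?? 'TOTAL') . ": {$row['error_count']}\n";
    }
    ?>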
