I have a few scripts run by cron quite often. Right now I don't store any logs, so if any script fails to run, I won't know it until I see the results - and even when I notice ...
It depends on the size of the logs and on the concurrency level. Because of the latter, your test is completely invalid: if there are 100 users on the site and you have, say, 10 threads writing to the same file, fwrite won't be nearly as fast. One of the things an RDBMS provides is concurrency control.
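To illustrate the concurrency point, a minimal sketch of serialising writers with an exclusive lock; the file path, function name and message format are just placeholders:

<?php
// Append one line to a shared log file, serialising concurrent writers
// with an exclusive advisory lock so interleaved writes don't corrupt lines.
function appendLogLine($file, $line)
{
    $fp = fopen($file, 'a');
    if ($fp === false)
    {
        return false;
    }
    $ok = false;
    if (flock($fp, LOCK_EX))    // blocks until this process owns the lock
    {
        $ok = (fwrite($fp, $line . "\n") !== false);
        fflush($fp);            // flush before releasing the lock
        flock($fp, LOCK_UN);
    }
    fclose($fp);
    return $ok;
}

appendLogLine('/var/log/phplogs/app.log', date('Y/m/d H:i:s') . ': something happened');
?>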
It depends on the requirements and on what kind of analysis you want to perform. Just reading records is easy, but what about aggregating some data over a defined period?
Large-scale web sites use systems like Scribe for writing their logs.
If you are talking about 5 records per minute, however, that is a really low load, so the main question is how you are going to read them. If a file is suitable for your needs, go with the file. Generally, append-only writes (usual for logs) are really fast.
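On the reading side, with a plain text file that is just a scan; a rough sketch, where the path and the "ERROR" marker are assumptions for the example:

<?php
// Scan a plain-text log and keep only the lines that mention an error.
$lines = file('/var/log/phplogs/app.log', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
if ($lines === false)
{
    $lines = array();
}
$errors = array();
foreach ($lines as $line)
{
    if (strpos($line, 'ERROR') !== false)
    {
        $errors[] = $line;
    }
}
printf("%d error lines out of %d total\n", count($errors), count($lines));
?>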
Personally, I prefer log files so I've created two functions:
<?php
// Append a timestamped message to the given log file.
function logMessage($message = null, $filename = null)
{
    if (!is_null($filename))
    {
        $logMsg = date('Y/m/d H:i:s') . ": $message\n";
        // Message type 3 appends to the file named in the third argument.
        error_log($logMsg, 3, $filename);
    }
}

// Log a message flagged as an error, via logMessage().
function logError($message = null, $filename = null)
{
    if (!is_null($message))
    {
        logMessage("***ERROR*** {$message}", $filename);
    }
}
?>
I define a constant or two (I use ACTIVITY_LOG and ERROR_LOG, both set to the same file, so you don't need to refer to two files side by side to get an overall view of the run) and call them as appropriate. I've also created a dedicated folder (/var/log/phplogs) and each application that I write has its own log file. Finally, I rotate logs so that I have some history to refer back to for customers.
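By way of illustration, usage might look something like this; the constant values, file name and doSomething() call are just placeholders:

<?php
// Both constants point at the same per-application log file.
define('ACTIVITY_LOG', '/var/log/phplogs/myapp.log');
define('ERROR_LOG', '/var/log/phplogs/myapp.log');

logMessage('Cron run started', ACTIVITY_LOG);

$result = doSomething();    // placeholder for the real work
if ($result === false)
{
    logError('doSomething() returned false', ERROR_LOG);
}

logMessage('Cron run finished', ACTIVITY_LOG);
?>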
Liberal use of the above functions means that I can trace the execution of apps fairly easily.
Commenting on your findings:
Regarding writing to the file, you are probably right.
Regarding reading it back, you are dead wrong.
Writing to a database: use the MEMORY engine; this keeps the table in RAM. Transfer the data to a disk-based table when CPU load is low.
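As a rough sketch of that setup (the table and column names are assumptions, loosely based on the queries below):

-- In-RAM staging table for incoming log entries.
CREATE TABLE log_buffer (
    logdate  DATETIME,
    userid   INT,
    username VARCHAR(32),
    action   VARCHAR(64),
    error    INT
) ENGINE=MEMORY;

-- When CPU load is low (e.g. from a cron job), flush the buffer
-- into the disk-based log table and empty it again.
INSERT INTO log SELECT * FROM log_buffer;
TRUNCATE TABLE log_buffer;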
Reading from the database: this is where the database truly shines.
You can combine all sorts of information from different entries, much, much faster and more easily than you ever could with a flat file.
SELECT logdate, username, action FROM log WHERE userid = '1' /*root*/ AND error = 10;
If you have indexes on the fields used in the WHERE clause, the result will return almost instantly; try doing that on a flat file.
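An index to back that query up could be created like this; the index name is illustrative and the columns follow the query above:

ALTER TABLE log ADD INDEX idx_user_error (userid, error);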
SELECT username, count(*) as error_count
FROM log
WHERE error <> 0
GROUP BY username WITH ROLLUP;
Never mind the fact that the table is not normalized; this will be much, much slower and harder to do with a flat file.
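For comparison, a sketch of the same aggregation over a flat file, assuming a hypothetical tab-separated format of date, user id, username, action and error code; every step the database does for you is manual here:

<?php
// Count non-zero error codes per user and produce a grand total by hand.
$counts = array();
$total  = 0;
foreach (file('/var/log/phplogs/app.log', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES) as $line)
{
    list($logdate, $userid, $username, $action, $error) = explode("\t", $line);
    if ((int)$error !== 0)
    {
        $counts[$username] = isset($counts[$username]) ? $counts[$username] + 1 : 1;
        $total++;
    }
}
foreach ($counts as $user => $errorCount)
{
    echo "$user: $errorCount\n";
}
echo "TOTAL: $total\n";    // the WITH ROLLUP row, done manually
?>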
It's a no-brainer, really.