We're still using old Classic ASP and want to log whenever a user does something in our application. We'll write a generic subroutine to take in the details we want to log.
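A minimal sketch of such a generic subroutine, logging to the database via ADODB. The connection string constant (`LOG_CONN`) and the table/column names (`ActivityLog`, `UserName`, `Action`, `Details`, `LoggedAt`) are illustrative assumptions, not anything defined in the question:

```vbscript
' Generic activity logger for Classic ASP (VBScript).
' LOG_CONN and the ActivityLog table layout are assumptions for illustration.
Sub LogActivity(userName, action, details)
    Dim conn, cmd
    Set conn = Server.CreateObject("ADODB.Connection")
    conn.Open LOG_CONN  ' connection string defined elsewhere in the app

    ' A parameterised command keeps free-text log details safe from SQL injection
    Set cmd = Server.CreateObject("ADODB.Command")
    Set cmd.ActiveConnection = conn
    cmd.CommandText = "INSERT INTO ActivityLog (UserName, Action, Details, LoggedAt) " & _
                      "VALUES (?, ?, ?, GETDATE())"
    cmd.Parameters.Append cmd.CreateParameter("u", 200, 1, 100, userName)   ' 200 = adVarChar, 1 = adParamInput
    cmd.Parameters.Append cmd.CreateParameter("a", 200, 1, 50, action)
    cmd.Parameters.Append cmd.CreateParameter("d", 200, 1, 4000, details)
    cmd.Execute , , 128  ' 128 = adExecuteNoRecords: no recordset needed

    conn.Close
    Set cmd = Nothing
    Set conn = Nothing
End Sub
```

Usage would then be a one-liner at each point of interest, e.g. `LogActivity Session("UserName"), "DeleteOrder", "OrderId=42"`.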
I agree with the above, with the perhaps obvious exception of logging database failures, which would make logging to the database problematic. This has come up for me in the past when I was dealing with infrequent but regular network failovers.
Edit
In hindsight, a better answer is to log to BOTH the file system (first, immediately) and then to a centralized database (even if delayed).
The rationale for writing to the file system first is that if an external infrastructure dependency (network, database, or a security issue) prevents you from writing remotely, you at least have a fallback and can recover the data from the web server's hard disk (something akin to a black box in the airline industry).
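The file-system side of this can be as simple as appending one line per event with FileSystemObject. The log directory and file-naming scheme below are assumptions; point it at any path the application's identity can write to:

```vbscript
' File-system fallback: append one line per event to a daily log file.
' The /logs directory is an assumption; use any path the app can write to.
Sub LogToFile(message)
    Const ForAppending = 8
    Dim fso, ts, fileName
    Set fso = Server.CreateObject("Scripting.FileSystemObject")
    ' One file per day, e.g. app-20240131.log
    fileName = Server.MapPath("/logs") & "\app-" & _
               Year(Date) & Right("0" & Month(Date), 2) & Right("0" & Day(Date), 2) & ".log"
    Set ts = fso.OpenTextFile(fileName, ForAppending, True)  ' True = create if missing
    ts.WriteLine Now & vbTab & message
    ts.Close
    Set ts = Nothing
    Set fso = Nothing
End Sub
```

One file per day also keeps individual files small enough for a log scraper (or a human with a text editor) to handle comfortably.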
In fact, enterprise log managers like Splunk can be configured to scrape your local server log files (e.g. as written by log4net, the EntLib Logging Application Block, et al.) and then centralize them in a searchable database, where the logged data can be mined, graphed, shown on dashboards, etc.
But from an operational perspective, where it is likely that you will have a farm of web servers, and assuming that both the local file system and remote database logging mechanisms are working, the 99% use case for actually trying to find anything in a log file will still be via the central database (ideally with a decent front end system to allow you to query, aggregate and even graph the log data).
Original Answer
If you have the database in place, I would recommend using this for audit records instead of the filesystem.
Rationale:

- structured records (severity, action type, user, date, ...)
- easy querying (select ... from Audits where ...) vs. grep
- easy maintenance (Delete from Audits where Date ...)

The decision to use an existing db or a new one depends on your situation: if you have multiple applications (with their own databases) and want to log / audit all actions in all apps centrally, then a centralized db might make sense.
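To make the querying and maintenance points concrete, here is what they might look like from Classic ASP via ADODB. The connection string (`AUDIT_CONN`), column names, and the 90-day retention window are all illustrative assumptions:

```vbscript
' Query and purge an audit table via ADODB.
' AUDIT_CONN and the Audits column names are assumptions for illustration.
Dim conn, rs
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open AUDIT_CONN

' "Easy querying": recent high-severity actions by one user
Set rs = conn.Execute( _
    "SELECT LoggedAt, Action, Details FROM Audits " & _
    "WHERE Severity = 'ERROR' AND UserName = 'jsmith' " & _
    "ORDER BY LoggedAt DESC")
Do While Not rs.EOF
    Response.Write rs("LoggedAt") & " " & rs("Action") & "<br>"
    rs.MoveNext
Loop
rs.Close

' "Easy maintenance": purge entries older than 90 days (T-SQL)
conn.Execute "DELETE FROM Audits WHERE LoggedAt < DATEADD(day, -90, GETDATE())"
conn.Close
Set rs = Nothing
Set conn = Nothing
```

Doing either of these against a directory of text files means ad-hoc grep/parse scripts instead of two lines of SQL.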
Since you say you want to audit user activity, it may make sense to audit in the same db as your users table / definition (if applicable).
Either works. It's up to your preference.
We have one central database where ALL of our apps log their error messages. Every app we write is set up in a table with a unique ID, and the error log table contains a foreign key reference to the AppId.
This has been a HUGE bonus for us, giving us one place to monitor errors. In the past we had done it with the file system or by sending emails to a monitored inbox, but we were able to create a fairly nice web app for interacting with the error logs. We have different error levels, and we have an "acknowledged" flag field, so we have a page where we can view unacknowledged events by severity, etc.
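A hedged sketch of what that "unacknowledged events by severity" page might query, assuming an `ErrorLog` table with an `AppId` foreign key into an `Apps` table, plus `Severity` and `Acknowledged` columns (all names are guesses at this design, not taken from the answer):

```vbscript
' List unacknowledged errors across all apps, worst first.
' ERRORLOG_CONN and the ErrorLog/Apps schema are assumptions for illustration.
Dim conn, rs
Set conn = Server.CreateObject("ADODB.Connection")
conn.Open ERRORLOG_CONN

Set rs = conn.Execute( _
    "SELECT a.AppName, e.Severity, e.Message, e.LoggedAt " & _
    "FROM ErrorLog e INNER JOIN Apps a ON a.AppId = e.AppId " & _
    "WHERE e.Acknowledged = 0 " & _
    "ORDER BY e.Severity, e.LoggedAt DESC")
Do While Not rs.EOF
    Response.Write Server.HTMLEncode(rs("AppName") & ": " & rs("Message")) & "<br>"
    rs.MoveNext
Loop
rs.Close
conn.Close
Set rs = Nothing
Set conn = Nothing
```

The join on `AppId` is what makes a single monitoring page across every application possible, which is exactly the benefit described above.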
Should we log this to, say, a txt file using FileSystemObject, or log it to an MSSQL database?
Another idea is to write the log file in XML and then query it using XPath. I like to think that this is the best of both worlds.
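In Classic ASP that approach might look like the sketch below, using MSXML to append entries and to query them with XPath. The file path, element, and attribute names are illustrative, and the code assumes the file already exists with a `<log>` root element:

```vbscript
' Append an entry to an XML log file, then query it with XPath via MSXML.
' Path and element/attribute names are assumptions for illustration.
Dim doc, node
Dim logPath
logPath = Server.MapPath("/logs/audit.xml")

Set doc = Server.CreateObject("MSXML2.DOMDocument.6.0")
doc.async = False
doc.Load logPath  ' assumes the file exists with a <log> root element

' Write: add one <entry> element per event
Set node = doc.createElement("entry")
node.setAttribute "user", "jsmith"
node.setAttribute "severity", "ERROR"
node.Text = "Could not resolve user name"
doc.documentElement.appendChild node
doc.Save logPath

' Read: XPath selects all ERROR entries (XPath is the default in MSXML 6.0)
Dim errs, e
Set errs = doc.documentElement.selectNodes("entry[@severity='ERROR']")
For Each e In errs
    Response.Write Server.HTMLEncode(e.Text) & "<br>"
Next
```

One caveat with this "best of both worlds": the whole document is re-parsed and re-saved on every write, so it suits low-volume logs better than a busy site.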
Looking at the responses, I think the answer may actually be both.
If it's a user error that's likely to happen during expected usage (e.g. the user enters an invalid email address), that should go into a database to take advantage of easy queries.
If it's a code error that shouldn't happen (e.g. you can't get the username of a logged-in user), that should be reserved for a txt file.
This also nicely splits the errors between non-critical and critical. Hopefully the critical error list stays small!
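That split could be expressed as a small routing subroutine. `LogToDatabase` and `LogToFile` here are hypothetical helpers (a DB insert and a FileSystemObject append, respectively); the fallback-to-disk behaviour is an assumption about how you'd want DB outages handled:

```vbscript
' Route an event by type: expected user errors go to the database,
' unexpected code errors go to a local text file.
' LogToDatabase and LogToFile are hypothetical helper subs.
Sub LogEvent(isCodeError, message)
    If isCodeError Then
        LogToFile "CRITICAL: " & message
    Else
        On Error Resume Next            ' a DB outage must not break the page
        LogToDatabase message
        If Err.Number <> 0 Then         ' fall back to disk if the insert failed
            LogToFile "DB-FALLBACK: " & message
            Err.Clear
        End If
        On Error GoTo 0
    End If
End Sub
```

This also addresses the database-failure concern raised earlier in the thread: a failed insert degrades to a file write instead of being lost.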
I'm creating a quick prototype of a new project right now, so I'll stick with txt for now.
On another note, email is great for this. Arguably you could just email the errors to a "bug" account and not store them locally. However, this shares the database approach's risk of failing when the connection is bad.