Given the wealth of log-file analysis programs out there and the number of server logs that are plain text, it's well established that plain-text log files do scale and are fairly easy to query.
In general, SQL databases are optimised for updating data robustly rather than for simply appending to the end of a time series. The implementation assumes that data should not be duplicated and that integrity constraints on references to other relations/tables need to be enforced. Since a log never updates an existing entry, it has no constraints that can be violated and no cascading deletions, so there's a lot of machinery there you'll never use.
You might prefer a database for transaction scalability, say if you centralise many logs into one database and so actually get some concurrency (though concurrency isn't intrinsic to the problem: keeping separate logs on each server would also allow this, but you'd then have to merge them to get totals across all your systems).
Using an SQL database is a bit more complicated than just appending to a file or two and calling fflush. OTOH, if you're very used to working with SQL and are already using a database in the project, then there's little overhead in also using it for logging.