Question
I've got a mod_wsgi server set up with 5 processes and a celery worker queue (2 workers), all on the same VM. I'm running into problems where the loggers are stepping on each other. There appear to be some solutions if you are using Python multiprocessing, but I don't see how those apply to mod_wsgi processes combined with celery processes.
What is everyone else doing about this problem? The celery tasks use code that logs to the same files as the webserver code.
Do I somehow have to add a pid to the log filename? That seems like it could get messy fast, with lots of log files with unique names and no coherent way to pull them all back together.
Do I have to write a log daemon that all the processes can log to? If so, where do you start it up so that it is ready for all of the processes that might want to log?
Surely there is some sane pattern out there for this; I just don't know what it is yet.
Answer 1:
As mentioned in the docs, you could use a separate server process which listens on a socket and logs to different destinations, with whatever logging configuration you want (files, console and so on). The other processes just configure a SocketHandler to send their events to the server process. This is generally better than separate log files with pids in their filenames.
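For example, here is a minimal sketch of the client side, assuming the log server runs on localhost at the default logging port (both are assumptions; use whatever host and port your server listens on). Each mod_wsgi process and each celery worker would run this once at startup:

    import logging
    import logging.handlers

    # Send raw LogRecords to the central log server. No formatter is
    # needed here; formatting happens on the server side.
    socket_handler = logging.handlers.SocketHandler(
        'localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT
    )

    root = logging.getLogger()
    root.addHandler(socket_handler)
    root.setLevel(logging.DEBUG)

    logging.getLogger('myapp').info('logged without file contention')

In a Django project you could express the same handler in the LOGGING dict instead, with 'class': 'logging.handlers.SocketHandler' plus 'host' and 'port' keys.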
The logging docs contain an example socket server implementation which you can adapt to your needs.
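Below is a condensed sketch of such a server, adapted from the cookbook's socket server example; the filename combined.log and the format string are placeholders:

    import logging
    import logging.handlers
    import pickle
    import socketserver
    import struct

    class LogRecordStreamHandler(socketserver.StreamRequestHandler):
        """Unpickle each incoming LogRecord and hand it to the
        server's locally configured loggers."""

        def handle(self):
            while True:
                # Each record is preceded by a 4-byte big-endian length.
                chunk = self.connection.recv(4)
                if len(chunk) < 4:
                    break
                slen = struct.unpack('>L', chunk)[0]
                chunk = self.connection.recv(slen)
                while len(chunk) < slen:
                    chunk += self.connection.recv(slen - len(chunk))
                record = logging.makeLogRecord(pickle.loads(chunk))
                # The server's own config decides where this record goes.
                logging.getLogger(record.name).handle(record)

    if __name__ == '__main__':
        # Single place where files, rotation and formatting are decided.
        logging.basicConfig(
            filename='combined.log',
            format='%(asctime)s %(processName)-10s %(name)s '
                   '%(levelname)-8s %(message)s',
        )
        server = socketserver.ThreadingTCPServer(
            ('localhost', logging.handlers.DEFAULT_TCP_LOGGING_PORT),
            LogRecordStreamHandler,
        )
        server.serve_forever()

As for where to start it: it can run independently of Apache and celery, for example as its own supervised service, so that it is already listening before any client process logs its first event.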
Source: https://stackoverflow.com/questions/19235402/how-to-do-logging-with-multiple-django-wsgi-processes-celery-on-the-same-webse