I am designing a dedicated syslog-processing daemon for Linux that needs to be robust and scalable, and I'm debating a multithreaded vs. a multiprocess design.
If you want robustness, use multi-processing; the obvious objection, added complexity, is addressed below.
The processes will share the logging load among them. Sooner or later, a logging request will hit a bug and crash the logger. With multi-processing you lose only that one process, and so only the one logging request it was handling (which you couldn't have handled anyway, because of the bug).
Multi-threading, by contrast, is vulnerable to crashes: one fatal bug in any thread takes out your single process, and every in-flight logging request with it.
Multi-processing is in some ways more technically challenging, since you have to balance the workload across processes, which may entail using shared memory.