First of all, I know this question (or close variations of it) has been asked a thousand times. I really spent a few hours looking in the obvious and the
Use fork()
and then call system
in the child process.
my $pid = fork();
if (defined $pid && $pid == 0) {
    # child
    system($command); # or exec($command)
    exit 0;
}
# parent
# ... continue ...
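One caveat with this approach: if the parent keeps running for a long time and never waits on the child, the finished child lingers as a zombie. A common remedy (standard perlipc advice, and only appropriate if you never need the child's exit status) is to have Perl auto-reap children before you fork:

# auto-reap exited children so they don't accumulate as zombies
$SIG{CHLD} = 'IGNORE';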
Another option would be to set up a Gearman server and one or more worker processes that do the emailing. That way you control how much emailing goes on simultaneously, and no forking is necessary. The client (your program) can add a task to the Gearman server (in the background, without waiting for a result if desired), and jobs are queued until the server hands them to an available worker. There are Perl and PHP APIs for Gearman, so it's very convenient.
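For example, here's a minimal sketch using the Gearman::Client and Gearman::Worker CPAN modules. The job-server address, the send_email function name, and the payload format are assumptions for illustration, not part of your setup. Client side (your program) just queues the job and returns immediately:

use strict;
use warnings;
use Gearman::Client;

my $client = Gearman::Client->new;
$client->job_servers('127.0.0.1:4730');   # assumed job-server address
$client->dispatch_background('send_email', 'user@example.com|Hello');

The worker is a separate long-running process that actually sends the mail:

use strict;
use warnings;
use Gearman::Worker;

my $worker = Gearman::Worker->new;
$worker->job_servers('127.0.0.1:4730');
$worker->register_function(send_email => sub {
    my $job = shift;
    my ($to, $body) = split /\|/, $job->arg;   # assumed payload format
    # ... send the mail here (e.g. by calling your existing sender code) ...
    return 1;
});
$worker->work while 1;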
Essentially you need to 'daemonize' a process -- fork off a child, and then entirely disconnect it from the parent so that the parent can safely terminate without affecting the child.
You can do this easily with the CPAN module Proc::Daemon:
use Proc::Daemon;
# do everything you need to do before forking the child...
# make into daemon; closes all open fds
Proc::Daemon::Init();
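If you want the parent to keep running rather than become the daemon itself, recent versions of Proc::Daemon also offer an object interface where Init() returns the child's PID to the parent and 0 to the daemonized child. A rough sketch (the log paths and the command below are placeholders, not anything from the question):

use strict;
use warnings;
use Proc::Daemon;

my $daemon = Proc::Daemon->new(
    work_dir     => '/var/tmp',              # placeholder working directory
    child_STDOUT => '/var/tmp/sender.out',   # placeholder log files
    child_STDERR => '/var/tmp/sender.err',
);

my $kid_pid = $daemon->Init();
if ($kid_pid == 0) {
    # daemonized child: fully detached from the parent
    exec('php', 'sender.php') or exit 1;
}
# parent: $kid_pid holds the daemon's PID; it can continue (or exit) safely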
Sometimes STDERR and STDOUT can also lock things up. To redirect both, I use the following (it works in most shell environments I use: bash, csh, etc.):
system("php sender.php > /dev/null 2>&1 &");
Managed to solve the problem. Apparently what kept the call from returning was that invoking the sender that way didn't disconnect STDOUT. So the solution was simply to change the system call to:
system("php sender.php > /dev/null &");
Thanks everybody for the help. In fact, it was while reading the whole story about "daemonizing" a process that I got the idea to disconnect STDOUT.