Question
Here is the C# code I'm using to launch a subprocess and monitor its output:
using (process = new Process()) {
    process.StartInfo.FileName = executable;
    process.StartInfo.Arguments = args;
    process.StartInfo.UseShellExecute = false;
    process.StartInfo.RedirectStandardOutput = true;
    process.StartInfo.RedirectStandardInput = true;
    process.StartInfo.CreateNoWindow = true;
    process.Start();

    using (StreamReader sr = process.StandardOutput) {
        string line = null;
        while ((line = sr.ReadLine()) != null) {
            processOutput(line);
        }
    }

    if (process.ExitCode == 0) {
        jobStatus.State = ActionState.CompletedNormally;
        jobStatus.Progress = 100;
    } else {
        jobStatus.State = ActionState.CompletedAbnormally;
    }
    OnStatusUpdated(jobStatus);
}
I am launching multiple subprocesses in separate ThreadPool threads (but no more than four at a time, on a quad-core machine). This all works fine.
The problem I am having is that one of my subprocesses will exit, but the corresponding call to sr.ReadLine() will block until ANOTHER one of my subprocesses exits. I'm not sure what it returns, but this should NOT be happening unless there is something I am missing.
There's nothing about my subprocesses that would cause them to be "linked" in any way - they don't communicate with each other. I can even look in Task Manager / Process Explorer while this is happening and see that my subprocess has actually exited, yet the call to ReadLine() on its standard output is still blocking!
I've been able to work around it by spinning the output-monitoring code out into a new thread and doing a process.WaitForExit(), but this seems like very odd behavior. Anyone know what's going on here?
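For reference, a minimal sketch of that workaround, assuming the same hypothetical fields and helpers as the snippet above (executable, args, processOutput), would look roughly like this:

    // Sketch of the workaround: read output on a dedicated thread,
    // and block the calling thread on the process itself instead of on ReadLine().
    using (Process process = new Process()) {
        process.StartInfo.FileName = executable;
        process.StartInfo.Arguments = args;
        process.StartInfo.UseShellExecute = false;
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.CreateNoWindow = true;
        process.Start();

        Thread reader = new Thread(() => {
            string line;
            while ((line = process.StandardOutput.ReadLine()) != null) {
                processOutput(line);
            }
        });
        reader.Start();

        process.WaitForExit();   // wait on the process, not on ReadLine()
        reader.Join();           // then let the reader thread drain and finish
    }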
Answer 1:
I think it's not your code that's the issue. Blocking calls can unblock for a number of reasons, not only because their task was accomplished.
I must admit I don't know about Windows, but in the Unix world, when a child process finishes, a signal is sent to the parent process, and this wakes it from any blocking calls. That would unblock a read on whatever input the parent was waiting on.
It wouldn't surprise me if Windows worked similarly. In any case, read up on the reasons why a blocking call may unblock.
Answer 2:
The MSDN documentation for ProcessStartInfo.RedirectStandardOutput discusses in detail the deadlocks that can arise when doing what you are doing here. A solution is provided there that uses ReadToEnd, but I imagine the same advice and remedy apply when you use ReadLine.
Synchronous read operations introduce a dependency between the caller reading from the StandardOutput stream and the child process writing to that stream. These dependencies can cause deadlock conditions. When the caller reads from the redirected stream of a child process, it is dependent on the child. The caller waits for the read operation until the child writes to the stream or closes the stream. When the child process writes enough data to fill its redirected stream, it is dependent on the parent. The child process waits for the next write operation until the parent reads from the full stream or closes the stream. The deadlock condition results when the caller and child process wait for each other to complete an operation, and neither can continue. You can avoid deadlocks by evaluating dependencies between the caller and child process.
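In outline, the remedy shown in that documentation consumes the redirected stream before waiting on the child. A sketch, with a placeholder executable name:

    // Read the redirected output first, then wait for exit - not the other way
    // around - so the parent never blocks while the child's pipe is full.
    Process p = new Process();
    p.StartInfo.FileName = "someTool.exe";   // hypothetical executable
    p.StartInfo.UseShellExecute = false;
    p.StartInfo.RedirectStandardOutput = true;
    p.Start();

    string output = p.StandardOutput.ReadToEnd();  // drain the stream first...
    p.WaitForExit();                               // ...then wait for the process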
The best solution seems to be async I/O rather than the sync methods:
You can use asynchronous read operations to avoid these dependencies and their deadlock potential. Alternately, you can avoid the deadlock condition by creating two threads and reading the output of each stream on a separate thread.
There is a sample in the documentation that ought to be useful to you if you go this route.
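A sketch of the asynchronous approach, reusing the question's hypothetical executable, args, and processOutput, might look like this:

    // Sketch of asynchronous output reading via OutputDataReceived,
    // which avoids blocking any thread on ReadLine().
    using (Process process = new Process()) {
        process.StartInfo.FileName = executable;
        process.StartInfo.Arguments = args;
        process.StartInfo.UseShellExecute = false;
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.CreateNoWindow = true;

        process.OutputDataReceived += (sender, e) => {
            if (e.Data != null) {     // null signals the end of the stream
                processOutput(e.Data);
            }
        };

        process.Start();
        process.BeginOutputReadLine();   // begin the asynchronous read loop
        process.WaitForExit();
    }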
Source: https://stackoverflow.com/questions/4055810/c-read-on-subprocess-stdout-blocks-until-another-subprocess-finishes