Here's my code:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>
#include <readline/readline.h>
Even after the first command of your pipeline exits (and thus closes its stdout, i.e. fdPipe[1]), the parent still has fdPipe[1] open. Thus, the second command of the pipeline has a stdin (fdPipe[0]) that never gets an EOF, because the other endpoint of the pipe is still open.
You need to create a new pipe(fdPipe) for each |, and make sure to close both endpoints in the parent; i.e.
for cmd in cmds
    if there is a next cmd
        pipe(new_fds)
    fork
    if child
        if there is a previous cmd
            dup2(old_fds[0], 0)
            close(old_fds[0])
            close(old_fds[1])
        if there is a next cmd
            close(new_fds[0])
            dup2(new_fds[1], 1)
            close(new_fds[1])
        exec cmd || die
    else
        if there is a previous cmd
            close(old_fds[0])
            close(old_fds[1])
        if there is a next cmd
            old_fds = new_fds
if there are multiple cmds
    close(old_fds[0])
    close(old_fds[1])
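To make that concrete, here is a minimal, self-contained C sketch of the pseudocode above. It is my own illustration rather than a patch to your code; the hard-coded three-stage pipeline ls | sort -r | head -3 and the bare-bones error handling are only there to keep it short.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char *cmds[][3] = {              /* the pipeline: ls | sort -r | head -3 */
        { "ls",   NULL, NULL },
        { "sort", "-r", NULL },
        { "head", "-3", NULL },
    };
    int ncmds = 3;
    int old_fds[2], new_fds[2];

    for (int i = 0; i < ncmds; i++) {
        if (i + 1 < ncmds)               /* there is a next cmd: make its pipe */
            pipe(new_fds);

        if (fork() == 0) {               /* child */
            if (i > 0) {                 /* previous cmd: read from the old pipe */
                dup2(old_fds[0], STDIN_FILENO);
                close(old_fds[0]);
                close(old_fds[1]);
            }
            if (i + 1 < ncmds) {         /* next cmd: write into the new pipe */
                close(new_fds[0]);
                dup2(new_fds[1], STDOUT_FILENO);
                close(new_fds[1]);
            }
            execvp(cmds[i][0], cmds[i]);
            perror("execvp");            /* exec only returns on failure */
            _exit(EXIT_FAILURE);
        }

        /* parent: drop its copies so each pipe can deliver EOF */
        if (i > 0) {
            close(old_fds[0]);
            close(old_fds[1]);
        }
        if (i + 1 < ncmds) {
            old_fds[0] = new_fds[0];
            old_fds[1] = new_fds[1];
        }
    }

    while (wait(NULL) > 0)               /* reap all the children */
        ;
    return 0;
}

The important part is the parent's block at the bottom of the loop: once both children attached to a pipe have been forked, the parent closes its own copies of that pipe, which is exactly what lets the downstream command see EOF.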
Also, to be safer, you should handle the case of fdPipe and {STDIN_FILENO, STDOUT_FILENO} overlapping before performing any of the close and dup2 operations. This may happen if somebody has managed to start your shell with stdin or stdout closed, and will result in great confusion with the code here.
  fdPipe1       fdPipe3
     v             v
cmd1 | cmd2 | cmd3 | cmd4 | cmd5
            ^             ^
         fdPipe2       fdPipe4
In addition to making sure you close the pipe's endpoints in the parent, I was trying to make the point that fdPipe1, fdPipe2, etc. cannot be the same pipe().
/* suppose stdin and stdout have been closed...
* for example, if your program was started with "./a.out <&- >&-" */
close(0), close(1);
/* then the result you get back from pipe() is {0, 1} or {1, 0}, since
* fd numbers are always allocated from the lowest available */
pipe(fdPipe);
/* now, if fdPipe[0] came back as 0, this close() destroys the read end of
 * the pipe, and the dup2() below fails because fd 0 is no longer open */
close(0);
dup2(fdPipe[0], 0);
I know you don't use close(0) in your present code, but the last paragraph is warning you to watch out for this case.
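If you want to guard against it, one option (a sketch of my own, not part of your code, and the helper name is made up) is to push any pipe end that lands on fd 0, 1 or 2 up to a higher descriptor right after pipe(fdPipe) returns, before any close()/dup2() happens:

#include <fcntl.h>
#include <unistd.h>

/* hypothetical helper: returns an equivalent descriptor that is >= 3 */
static int move_fd_above_stdio(int fd) {
    if (fd < 0 || fd > STDERR_FILENO)
        return fd;                        /* already out of harm's way */
    int moved = fcntl(fd, F_DUPFD, 3);    /* duplicate onto the lowest fd >= 3 */
    if (moved != -1)
        close(fd);                        /* drop the low-numbered copy */
    return moved;
}

/* usage, immediately after pipe(fdPipe):
 *   fdPipe[0] = move_fd_above_stdio(fdPipe[0]);
 *   fdPipe[1] = move_fd_above_stdio(fdPipe[1]);
 */

F_DUPFD returns the lowest available descriptor greater than or equal to its argument, so after this the pipe ends can no longer collide with the standard streams.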
The following minimal change to your code makes it work in the specific failing case you mentioned:
@@ -12,6 +12,4 @@
     pid_t pid;
 
-    pipe(fdPipe);
-
     while(1) {
         bBuffer = readline("Shell> ");
@@ -29,4 +27,6 @@
     } while(aPtr);
 
+    pipe(fdPipe);
+
     for(i = 0; i < pCount; i++) {
         aCount = -1;
@@ -72,4 +72,7 @@
     }
 
+    close(fdPipe[0]);
+    close(fdPipe[1]);
+
     for(i = 0; i < pCount; i++) {
         waitpid(lPids[i], &status, 0);
This won't work for a pipeline with more than one pipe (i.e. more than two commands), though; for that, you'd need something like this (untested, as you have other things to fix as well):
@@ -9,9 +9,7 @@
 int main(int argc, char *argv[]) {
     char *bBuffer, *sPtr, *aPtr = NULL, *pipeComms[NUMPIPES], *cmdArgs[10];
-    int fdPipe[2], pCount, aCount, i, status, lPids[NUMPIPES];
+    int fdPipe[2], fdPipe2[2], pCount, aCount, i, status, lPids[NUMPIPES];
     pid_t pid;
 
-    pipe(fdPipe);
-
     while(1) {
         bBuffer = readline("Shell> ");
@@ -32,4 +30,7 @@
         aCount = -1;
 
+        if (i + 1 < pCount)
+            pipe(fdPipe2);
+
         do {
             aPtr = strsep(&pipeComms[i], " ");
@@ -43,11 +44,12 @@
 
         if(pid == 0) {
-            if(i == 0) {
-                close(fdPipe[0]);
+            if(i + 1 < pCount) {
+                close(fdPipe2[0]);
 
-                dup2(fdPipe[1], STDOUT_FILENO);
+                dup2(fdPipe2[1], STDOUT_FILENO);
 
-                close(fdPipe[1]);
-            } else if(i == 1) {
+                close(fdPipe2[1]);
+            }
+            if(i != 0) {
                 close(fdPipe[1]);
 
@@ -70,4 +72,17 @@
             }
         }
+
+        if (i != 0) {
+            close(fdPipe[0]);
+            close(fdPipe[1]);
+        }
+
+        fdPipe[0] = fdPipe2[0];
+        fdPipe[1] = fdPipe2[1];
+    }
+
+    if (pCount) {
+        close(fdPipe[0]);
+        close(fdPipe[1]);
 
     }
You should have an error exit after execvp() - it will fail sometime.
exit(EXIT_FAILURE);
As @uncleo points out, the argument list must have a null pointer to indicate the end:
cmdArgs[aCount] = 0;
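Putting those two points together, the tail of the child branch could look roughly like this (a sketch reusing the question's cmdArgs/aCount names, so treat the exact variables as an assumption):

cmdArgs[aCount] = NULL;        /* execvp() requires a NULL-terminated argv */

execvp(cmdArgs[0], cmdArgs);

/* we only get here if execvp() failed, e.g. the command was not found */
perror(cmdArgs[0]);
exit(EXIT_FAILURE);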
It is not clear to me that you let both programs run free - it appears that you require the first program in the pipeline to finish before starting the second, which is not a recipe for success if the first program blocks because the pipe is full.
One potential problem is that cmdArgs may have garbage at the end of it. You're supposed to terminate that array with a null pointer before passing it to execvp().
It looks like grep is accepting STDIN, though, so that might not be causing any problems (yet).
Jonathan has the right idea. You rely on the first process to fork all the others. Each one has to run to completion before the next one is forked.
Instead, fork the processes in a loop like you are doing, but wait for them outside the inner loop, at the bottom of the big loop for the shell prompt:
loop                          // for prompt
    next prompt
    loop                      // to fork tasks, store the pids
        if pid == 0 run command
        else store the pid
    end loop
    loop                      // on pids
        wait
    end loop
end loop
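A compact, self-contained C sketch of that shape (my own example: the commands are arbitrary and the pipe plumbing from the earlier answers is omitted, so that only the fork-all-then-wait-all structure shows):

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    const char *cmds[] = { "date", "uptime", "hostname" };
    enum { N = 3 };
    pid_t pids[N];
    int status;

    for (int i = 0; i < N; i++) {          /* fork every task first */
        pids[i] = fork();
        if (pids[i] == 0) {                /* child: run the command */
            execlp(cmds[i], cmds[i], (char *)NULL);
            _exit(127);                    /* exec failed */
        }
        /* parent: only remember the pid; do not wait here */
    }

    for (int i = 0; i < N; i++)            /* all tasks now run concurrently; */
        waitpid(pids[i], &status, 0);      /* reap them after the fork loop   */

    return 0;
}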
I think your forked processes will continue executing.
Try either:
The file descriptors from the pipe are reference counted and incremented with each fork. For every fork, you have to issue a close on both descriptors in order to drop the reference count to zero and allow the pipe to close. I'm guessing.
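That is essentially how it works: a reader only sees EOF once every copy of the write end, in every process, has been closed. A tiny standalone demo (my own, unrelated to the question's code):

#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    char buf[64];
    ssize_t n;

    pipe(fd);

    if (fork() == 0) {                   /* child: the reader */
        close(fd[1]);                    /* drop the child's copy of the write end */
        while ((n = read(fd[0], buf, sizeof buf)) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        _exit(0);                        /* read() returned 0: EOF at last */
    }

    close(fd[0]);                        /* parent: the writer */
    write(fd[1], "hello\n", 6);
    close(fd[1]);                        /* without this, the child never gets EOF */
    wait(NULL);
    return 0;
}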