pipe

Can I stop later parts of a pipeline from running if an earlier part failed?

岁酱吖の submitted on 2021-01-29 03:01:03
Question: I have a piped command such as:

    set -euxo pipefail
    echo 'hello' | foo | touch example.sh

This is the output:

    $ set -euxo pipefail
    $ echo hello
    $ foo
    $ touch example.sh
    pipefail.sh: line 4: foo: command not found

I thought set -e would cause the script to exit. But even though foo is unrecognized, the script still executes the touch command. How do I get it to exit if foo fails?

Answer 1: You can't really think of a pipeline as having "earlier" or "later" parts, except insofar as data

Do I have to make a new pipe for every pair of processes in C?

怎甘沉沦 submitted on 2021-01-29 01:51:29
Question: If I have 4 processes that I want to pipe:

    process1 | process2 | process3 | process4

do I have to make 3 individual pipes like this:

    int pipe1[2];
    int pipe2[2];
    int pipe3[2];

or can I somehow recycle pipe names, as in this pseudocode:

    int pipe1[2];         // we use ONLY two pipe names: pipe1
    int pipe2[2];         // and pipe2
    pipe(pipe1);          // getting 2 file descriptors here
    pipe(pipe2);          // and 2 here
    for process=1 to 4
        if (process==3)   // getting 2 new file descriptors for
            pipe(pipe1);  // process3|process4

xargs pass multiple arguments to perl subroutine?

醉酒当歌 submitted on 2021-01-28 13:55:06
Question: I know how to pipe multiple arguments with xargs:

    echo a b | xargs -l bash -c '1:$0 2:$1'

and I know how to pass the array of arguments to my Perl module's subroutine from xargs:

    echo a b | xargs --replace={} perl -I/home/me/module.pm -Mme -e 'me::someSub("{}")'

But I can't seem to get multiple individual arguments passed to perl using those dollar references (to satisfy the me::someSub signature):

    echo a b | xargs -l perl -e 'print("$0 $1")'

This just prints:

    -e

So how do I get the shell

Bi-directional inter-process communication using two pipes

为君一笑 submitted on 2021-01-28 08:44:22
Question: I am trying to write code that forks a subprocess and communicates with it using pipes. I am using two pipes: one for writing to, and the other for reading from, the standard streams of the subprocess. Here's what I have so far:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <signal.h>
    #include <string.h>

    void read_move(int fd) {
        FILE *stream = fdopen(fd, "r");
        char c;
        setvbuf(stream, NULL, _IONBF, BUFSIZ);
        while ((c = fgetc(stream)) != EOF) {
            putchar(c);
        }
        fclose(stream)

Passing a variable between pipes in Gulp 3.9.1

孤街浪徒 submitted on 2021-01-28 05:29:39
Question: Using gulp 3.9.1, I am attempting to return a bunch of files and perform a task that requires a variable to be passed between two pipes. I'm using node uuid to create a v3 UUID for each file path, to ultimately end up with a UUID for each page. I'm grabbing the file path with gulp-print, and I want to store that UUID value as a variable. In the next pipe, I'm using gulp-inject-string to write it into the page during the build. Help: either I need help getting the file path inside the gulp-inject-string pipe, or

Sending data to stdin of another process through linux terminal

…衆ロ難τιáo~ submitted on 2021-01-28 04:22:31
Question: I've been trying to send data to the stdin of a running process. Here is what I do:

    1. In a terminal, I start a C++ program that simply reads a string and prints it. Code excerpt:

        while (true) {
            cin >> s;
            cout << "I've just read " << s << endl;
        }

    2. I get the PID of the running program.
    3. I go to /proc/PID/fd/.
    4. I execute echo text > 0.

Result: "text" appears in the terminal where the program is running. Note: not "I've just read text", but simply "text". What am I doing wrong, and what should I do to get this

Pipe function in a Linux shell written in C

早过忘川 submitted on 2021-01-28 04:08:08
Question: My mini-shell program accepts a pipe command, for example ls -l | wc -l, and uses execvp to execute the commands. My problem is that if there is no fork() before execvp, the pipe command works well but the shell terminates afterward; if there is a fork() before execvp, it hangs in an infinite loop, and I cannot fix it. Code:

    void run_pipe(char **args){
        int ps[2];
        pipe(ps);
        pid_t pid = fork();
        pid_t child_pid;
        int child_status;
        if(pid == 0){ // child process
            close(1);
            close(ps[0]);
            dup2(ps[1], 1);
            //e.g. cmd[0] =