pipe

Better multithreaded use of Python subprocess.Popen & communicate()?

好久不见. Submitted on 2020-01-01 17:30:19
Question: I'm running multiple commands, each of which may take some time, in parallel on a Linux machine running Python 2.6. So I used the subprocess.Popen class and the process.communicate() method to parallelize execution of multiple command groups and capture all the output at once after execution.

```python
def run_commands(commands, print_lock):
    # this part runs in parallel.
    outputs = []
    for command in commands:
        proc = subprocess.Popen(shlex.split(command),
                                stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT,
                                close_fds=True)
```
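A Python 3 sketch of the pattern the question describes (the original targets Python 2.6): call communicate() on each Popen from a pool of worker threads, so slow commands overlap. The ThreadPoolExecutor and the placeholder echo commands are illustrative additions, not from the question.

```python
import shlex
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_command(command):
    # communicate() drains stdout and waits for exit, avoiding the
    # deadlock that wait() plus a full PIPE buffer can cause.
    proc = subprocess.Popen(
        shlex.split(command),
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
        close_fds=True,
    )
    out, _ = proc.communicate()
    return out.decode()

commands = ["echo one", "echo two", "echo three"]  # placeholder commands
with ThreadPoolExecutor(max_workers=3) as pool:
    # map() preserves input order, so outputs line up with commands.
    outputs = list(pool.map(run_command, commands))
```

Each thread blocks in communicate() independently, so the total wall time is roughly that of the slowest command rather than the sum.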

How to loop through only active file descriptors from fd_set result from select()?

那年仲夏. Submitted on 2020-01-01 10:03:53
Question: My current server implementation is currently something like this:

```c
void loop() {
    fd_set readfds;
    while (true) {
        // step 1: clear the set
        FD_ZERO(&readfds);
        // step 2:
        loop_through_sockets_and_add_active_sockets_to(&readfds);
        // step 3:
        switch (select(FD_SETSIZE, &readfds, 0, 0, &tv)) {
        case SOCKET_ERROR:
            patia->receiveEvent(Error, net::getError());
            return;
        case 0:
            return;
        }
        // step 4: loop through sockets and check, using FD_ISSET,
        // which read fd's have incoming data.
    }
}
```

Now, not
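For comparison, Python's select.select() returns only the descriptors that are actually ready, which is essentially the idea the question is after: keep your own list of registered fds and iterate just the ones the kernel reports, instead of scanning the whole set with FD_ISSET. A small sketch using pipes as stand-ins for sockets:

```python
import os
import select

# Create a few pipes; write to only some of them.
pipes = [os.pipe() for _ in range(3)]
os.write(pipes[0][1], b"x")
os.write(pipes[2][1], b"y")

read_fds = [r for r, w in pipes]
# select() hands back *only* the ready descriptors, so the
# post-select loop touches exactly the active fds.
ready, _, _ = select.select(read_fds, [], [], 1.0)

data = {fd: os.read(fd, 1) for fd in ready}
```

In C the analogous approach is to remember the fds you added to the set in an array and test only those, or to switch to poll()/epoll, which report ready fds directly.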

Python equivalent of piping file output to gzip in Perl using a pipe

℡╲_俬逩灬. Submitted on 2020-01-01 08:50:38
Question: I need to figure out how to write file output to a compressed file in Python, similar to the two-liner below:

```perl
open ZIPPED, "| gzip -c > zipped.gz";
print ZIPPED "Hello world\n";
```

In Perl, this uses the Unix gzip command to compress whatever you print to the ZIPPED filehandle into the file "zipped.gz". I know how to use "import gzip" to do this in Python like this:

```python
import gzip
zipped = gzip.open("zipped.gz", 'wb')
zipped.write("Hello world\n")
```

However, that is extremely slow. According to the profiler, using
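A sketch of the direct Python equivalent of the Perl idiom: pipe the data to an external gzip process via subprocess, so compression runs in a separate process just as in Perl. This assumes the gzip binary is on PATH.

```python
import subprocess

# Mirror Perl's open(ZIPPED, "| gzip -c > zipped.gz"): spawn gzip with
# its stdin connected to a pipe and its stdout pointed at the file.
with open("zipped.gz", "wb") as out:
    gzip_proc = subprocess.Popen(["gzip", "-c"],
                                 stdin=subprocess.PIPE,
                                 stdout=out)
    gzip_proc.stdin.write(b"Hello world\n")
    gzip_proc.stdin.close()  # EOF tells gzip to finish the stream
    gzip_proc.wait()
```

Because the compression happens in a child process, the Python side only pays for the pipe writes, which is the property that makes the Perl version fast.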

I/O redirection from child process using pipes - winapi

笑着哭i. Submitted on 2020-01-01 07:01:07
Question: I'm working with an application that offers an API so that scripting it is easier. Basically, when you write valid input, it outputs an answer. I would like to use that output to send more input, e.g.:

Input: <nodes>
Output: 1, 56, 23
Input: <56>
Output: "Apple"

What I'd like to do is write a program that writes to the target process's STDIN, then reads the output from its STDOUT. To do that, I mostly took the code from Creating a Child Process with Redirected Input and Output (Windows) -
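A cross-platform sketch of the same redirection pattern using Python's subprocess (the linked WinAPI sample does the equivalent with CreatePipe/CreateProcess and redirected handles). The small inline child program here is a stand-in for the real application:

```python
import subprocess
import sys

# Stand-in child: echoes each stdin line back with a prefix.
child_code = (
    "import sys\n"
    "for line in sys.stdin:\n"
    "    print('got: ' + line.strip())\n"
    "    sys.stdout.flush()\n"
)
proc = subprocess.Popen(
    [sys.executable, "-u", "-c", child_code],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)
proc.stdin.write("<nodes>\n")
proc.stdin.flush()           # make sure the child sees the line now
reply = proc.stdout.readline()
proc.stdin.close()           # EOF lets the child exit
proc.wait()
```

The flush calls on both sides matter: with fully buffered pipes, a request/response loop like this deadlocks if either side holds data in its buffer.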

The usage of pipe in AWK

南笙酒味. Submitted on 2020-01-01 06:13:23
Question: What I want is to get the reversed string of the current line. I tried to use the rev command from AWK but cannot get the correct result.

```shell
$ cat myfile.txt
abcde
$ cat myfile.txt | awk '{cmd="echo "$0"|rev"; cmd | getline result; print "result="$result; close(cmd);}'
abcde
```

I want to get edcba in the output. I know there are other ways to get the reversed string, such as $ cat myfile.txt | exec 'rev'. I'm using AWK here because there is other processing to do. Did I miss anything?
Answer 1: The
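The likely culprit is $result: in awk, $result treats the string stored in result as a field number, which evaluates to $0, i.e. the original unreversed line. Printing the plain variable result fixes it. A sketch driving the corrected one-liner from Python (assumes the awk and rev binaries are installed):

```python
import subprocess

# Corrected awk program: "result=" result, not "result="$result.
# $result would coerce "edcba" to 0 and print $0 (the input line).
awk_prog = ('{cmd="echo " $0 " | rev"; cmd | getline result; '
            'print "result=" result; close(cmd)}')
out = subprocess.run(
    ["awk", awk_prog],
    input="abcde\n",
    capture_output=True,
    text=True,
).stdout
```

close(cmd) after getline is still needed, as in the question, so that a fresh rev process is started for each input line.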

Is there a way to improve performance of linux pipes?

跟風遠走. Submitted on 2020-01-01 04:43:37
Question: I'm trying to pipe extremely high-speed data from one application to another on 64-bit CentOS 6. I have run the following benchmarks using dd to discover that the pipes are holding me back, not the algorithm in my program. My goal is to achieve somewhere around 1.5 GB/s. First, without pipes:

```shell
$ dd if=/dev/zero of=/dev/null bs=8M count=1000
1000+0 records in
1000+0 records out
8388608000 bytes (8.4 GB) copied, 0.41925 s, 20.0 GB/s
```

Next, a pipe between two dd processes: dd if=/dev/zero bs
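One Linux-specific lever worth knowing here is enlarging the pipe's kernel buffer with the F_SETPIPE_SZ fcntl (kernel 2.6.35+; the named constant only appears as fcntl.F_SETPIPE_SZ in Python 3.10, so the raw value is used as a fallback below). A larger buffer reduces context switches per byte transferred, though it alone may not reach the 1.5 GB/s target. A sketch:

```python
import fcntl
import os

# Linux fcntl values; exposed as fcntl.F_SETPIPE_SZ / F_GETPIPE_SZ
# on Python 3.10+, hard-coded here as a fallback for older versions.
F_SETPIPE_SZ = getattr(fcntl, "F_SETPIPE_SZ", 1031)
F_GETPIPE_SZ = getattr(fcntl, "F_GETPIPE_SZ", 1032)

r, w = os.pipe()
default_size = fcntl.fcntl(w, F_GETPIPE_SZ)   # typically 64 KiB
fcntl.fcntl(w, F_SETPIPE_SZ, 1024 * 1024)     # request 1 MiB
new_size = fcntl.fcntl(w, F_GETPIPE_SZ)       # kernel rounds as needed
```

The request must stay at or below /proc/sys/fs/pipe-max-size (1 MiB by default) for unprivileged processes, and the kernel rounds the size up to a power of two.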

Python os.pipe vs multiprocessing.Pipe

送分小仙女□. Submitted on 2020-01-01 04:31:05
Question: Recently I've been studying parallel programming tools in Python, and I see two major differences between os.pipe and multiprocessing.Pipe (apart from the occasions on which they are used): os.pipe is unidirectional, while multiprocessing.Pipe is bidirectional; and when putting things into / receiving things from the pipe, os.pipe uses encode/decode, while multiprocessing.Pipe uses pickle/unpickle. I want to know if my understanding is correct, and whether there are other differences. Thank you.
Answer 1: I believe everything you've
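A minimal sketch contrasting the two, matching the differences listed above:

```python
import multiprocessing
import os

# os.pipe: one-way, raw bytes; the caller encodes/decodes.
r, w = os.pipe()
os.write(w, "hello".encode())
raw = os.read(r, 5).decode()

# multiprocessing.Pipe: two-way by default (duplex=True), and
# send()/recv() pickle arbitrary Python objects, not raw bytes.
a, b = multiprocessing.Pipe()
a.send({"n": 1})          # a -> b
obj = b.recv()
b.send([1, 2, 3])         # b -> a, over the same pipe
back = a.recv()
```

One further difference worth noting: multiprocessing.Connection objects are message-oriented (each send() is one recv()), whereas os.pipe is a byte stream with no message boundaries.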

Linux: Checking if a socket/pipe is broken without doing a read()/write()

霸气de小男生. Submitted on 2020-01-01 03:09:10
Question: I have a simple piece of code that periodically writes data to an fd that's passed to it. The fd will most likely be a pipe or socket but could potentially be anything. I can detect when the socket/pipe is closed/broken whenever I write() to it, since I get an EPIPE error (I'm ignoring SIGPIPE). But I don't write to it all the time, and so might not detect a closed socket for a long time. I need to react to the closure as soon as possible. Is there a method of checking the fd without having to do a write()?
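On Linux, poll() can report the closure without any I/O: register the fd with an empty event mask, and POLLERR/POLLHUP are still delivered when the peer end goes away. A sketch using a pipe (socket behavior differs in details, e.g. half-close semantics):

```python
import os
import select

r, w = os.pipe()
poller = select.poll()
# Event mask 0: we ask for no read/write readiness, but the kernel
# always reports POLLERR / POLLHUP regardless of the mask.
poller.register(w, 0)

events_before = poller.poll(0)   # reader still open: nothing reported
os.close(r)                      # peer closes its end of the pipe
events_after = poller.poll(100)  # POLLERR now reported on the writer
```

This makes it possible to notice a broken fd from an event loop immediately, instead of waiting for the next write() to fail with EPIPE.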

xcodebuild corrupts test result output when output redirected to file

非 Y 不嫁゛. Submitted on 2019-12-31 22:27:54
Question: I have Jenkins with the Xcode plugin configured to run unit tests by adding the test build action to the "Custom xcodebuild arguments" setting. (For more information on getting Jenkins to run the unit tests at all with Xcode 5, see this question.) Now that I have it running, it seems to mix console output from NSLog statements or the final ** TEST SUCCEEDED ** message with the test results, thus occasionally tripping up the parser that converts unit test results to the JUnit format required for