job-control

(Hadoop) MapReduce - Chain jobs - JobControl doesn't stop

…衆ロ難τιáo~ submitted on 2019-11-29 04:14:48
Question: I need to chain two MapReduce jobs. I used JobControl to set job2 as dependent on job1. It works, the output files are created! But it doesn't stop! In the shell it remains in this state:

12/09/11 19:06:24 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
12/09/11 19:06:25 INFO input.FileInputFormat: Total input paths to process : 1
12/09/11 19:06:25 INFO util.NativeCodeLoader: Loaded the native-hadoop library
12/09/11 19

run a shell script and immediately background it, however keep the ability to inspect its output

孤街醉人 submitted on 2019-11-28 08:23:56
How can I run a shell script and immediately background it, while keeping the ability to inspect its output at any time by tailing /tmp/output.txt? It would be nice if I could also foreground the process later. PS: it would be really cool if you could also show me how to "send" the backgrounded process into a GNU screen that may or may not have been initialized. To "background" a process when you start it, simply add an ampersand (&) after the command. If the program writes to standard out, it will still write to your console/terminal. To foreground the process, simply use the fg command. (You can see
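A minimal sketch of the approach described above, with output redirected to /tmp/output.txt. The short worker loop is a hypothetical stand-in for the real script:

```shell
# Stand-in "script": a short worker loop (hypothetical placeholder).
# Redirect its output to a file and background the whole thing with &.
(for i in 1 2 3; do echo "line $i"; done) > /tmp/output.txt 2>&1 &
bgpid=$!                 # PID of the background job
wait "$bgpid"            # demo only; in real use you would skip this
cat /tmp/output.txt      # the captured output is all here
```

While the job is still running you can watch it with `tail -f /tmp/output.txt`, and in an interactive shell `fg %1` brings it back to the foreground.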

bg / fg inside a command line loop

試著忘記壹切 submitted on 2019-11-28 00:44:34
Question: Ctrl-Z (^Z) acts in ways I do not understand when pressed inside a loop executed from a terminal. Say I type

for ii in {0..100}; do echo $ii; sleep 1; done

and then hit ^Z. I get:

[1]+ Stopped sleep 1

I can resume the job using fg or bg, but the job refers only to the sleep command. The rest of the loop has apparently disappeared, and no more numbers appear on the terminal. I could use & after the command to run it in the background immediately, or another solution is to wrap the whole thing in a
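The wrapping idea the excerpt trails off into can be sketched as follows: run the loop in a subshell so the shell treats the entire loop as a single job (the range is shortened here from the question's {0..100} for brevity):

```shell
# Parentheses run the loop in one subshell, so ^Z suspends the whole
# loop as one job, and fg/bg resume all of it, not just the current sleep.
( for ii in {0..3}; do echo "$ii"; sleep 1; done )
```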

Wait for bash background jobs in script to be finished

ぃ、小莉子 submitted on 2019-11-27 20:21:55
To maximize CPU usage (I run things on a Debian Lenny in EC2) I have a simple script to launch jobs in parallel:

#!/bin/bash
for i in apache-200901*.log; do echo "Processing $i ..."; do_something_important; done &
for i in apache-200902*.log; do echo "Processing $i ..."; do_something_important; done &
for i in apache-200903*.log; do echo "Processing $i ..."; do_something_important; done &
for i in apache-200904*.log; do echo "Processing $i ..."; do_something_important; done &
...

I'm quite satisfied with this working solution, however I couldn't figure out how to write further code which only
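The usual answer here is bash's `wait` builtin, which blocks until every background child of the current shell has exited. A sketch under that assumption, with `do_something_important` (the question's placeholder) stubbed as a no-op:

```shell
#!/bin/bash
do_something_important() { :; }   # stub for the question's placeholder

for batch in 01 02 03 04; do
  ( for i in apache-2009${batch}*.log; do
      echo "Processing $i ..."
      do_something_important
    done ) &
done

wait                    # blocks until every background job has finished
echo "All jobs finished"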

linux: kill background task

孤人 submitted on 2019-11-27 10:01:58
How do I kill the last spawned background task in Linux? Example:

doSomething
doAnotherThing
doB &
doC
doD
#kill doB ????

There's a special variable for this in bash: kill $!, where $! expands to the PID of the last process executed in the background. You can also kill by job number. When you put a task in the background you'll see something like:

$ ./script &
[1] 35341

That [1] is the job number and can be referenced like:

$ kill %1
$ kill %%   # Most recent background job

To see a list of job numbers use the jobs command. More from man bash: There are a number of ways to refer to a job in the shell. The
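Both answers can be demonstrated in a few lines; `kill -0` is used only to probe whether the PID is still alive:

```shell
sleep 100 &                   # stand-in for doB
pid=$!                        # $! = PID of the most recent background process
kill "$pid"                   # equivalent to: kill $!
wait "$pid" 2>/dev/null || true   # reap the killed job
kill -0 "$pid" 2>/dev/null || echo "background task is gone"
# Interactively, job specs work too:
#   kill %1   # job number 1, as shown by jobs
#   kill %%   # most recent background job
```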

Why can't I use job control in a bash script?

ぐ巨炮叔叔 submitted on 2019-11-26 17:34:00
In this answer to another question, I was told that in scripts you don't have job control (and trying to turn it on is stupid). This is the first time I've heard this, and I've pored over the bash.info section on Job Control (chapter 7), finding no mention of either of these assertions. [Update: the man page is a little better, mentioning "typical" use, default settings, and terminal I/O, but no real reason why job control is particularly ill-advised for scripts.] So why doesn't script-based job control work, and what makes it a bad practice (aka "stupid")? Edit: The script in question starts
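For reference, job control can be switched on inside a script with `set -m`; the job table then behaves much as in an interactive shell, though `fg` still needs a controlling terminal. A sketch, not a recommendation:

```shell
#!/bin/bash
set -m        # enable job control explicitly (it is off by default in scripts)
sleep 5 &
jobs          # now prints a job-table entry, e.g.: [1]+ Running sleep 5 &
kill %1       # job specs like %1 become usable; fg %1 would need a terminal
```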

In what order should I send signals to gracefully shutdown processes?

試著忘記壹切 submitted on 2019-11-26 17:06:58
In a comment on this answer to another question, the commenter says: don't use kill -9 unless absolutely necessary! SIGKILL can't be trapped, so the killed program can't run any shutdown routines to e.g. erase temporary files. First try HUP (1), then INT (2), then QUIT (3). I agree in principle about SIGKILL, but the rest is news to me. Given that the default signal sent by kill is SIGTERM, I would expect it to be the most commonly expected signal for graceful shutdown of an arbitrary process. Also, I have seen SIGHUP used for non-terminating reasons, such as telling a daemon "re-read your
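The escalation idea can be sketched like this; the child below simulates a program with a TERM cleanup handler (the handler body and the delays are illustrative):

```shell
# Child with a graceful-shutdown handler: exits 0 when it receives TERM.
# (sleep ... & wait lets the trap fire promptly while the child is idle.)
bash -c 'trap "exit 0" TERM; sleep 5 & wait' &
pid=$!
sleep 0.3                        # give the child time to install its trap
kill -TERM "$pid"                # polite first: handlers can clean up
sleep 0.5
if kill -0 "$pid" 2>/dev/null; then
  kill -KILL "$pid"              # SIGKILL can't be trapped; last resort only
fi
echo "shutdown complete"
```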
