I have a shell script which runs many commands, and I want to run several of them in parallel, a limited number at a time.
To make things run in parallel, put '&' at the end of a shell command to run it in the background; wait with no arguments then blocks until all background processes have finished. So you can, for example, kick off 10 in parallel, wait, then do another ten. This is easy to do with two nested loops, as in the sketch below.
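A minimal sketch of that batch pattern, assuming a hypothetical ./task script that takes one numeric argument:

for batch in $(seq 0 10 90); do      # 100 tasks, in batches of 10
    for i in $(seq 1 10); do
        ./task $((batch + i)) &      # start one background job
    done
    wait                             # no arguments: block until the whole batch exits
done

The drawback of fixed batches is that each batch runs as slowly as its slowest member; the tools mentioned below avoid that by starting a new job as soon as any slot frees up.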
There is a simple, portable program that does just this for you: PPSS. PPSS schedules jobs automatically: it checks how many cores are available and launches a new job whenever a running one finishes.
The documentation for GNU parallel includes a whole list of programs that can run jobs in parallel from a shell, along with comparisons between them; there are many, many solutions out there. The good news is that they are probably quite efficient at scheduling jobs, so all the cores/processors are kept busy at all times.
Suppose you have a huge file, like a shop database, and you want to rewrite it on some specific basis. My idea was: count the cores, split the file into as many parts as there are cores, and use three pieces: script.cfg (the script that changes things in the huge file), split.sh, and recombine.sh. split.sh splits the file into one part per core, clones script.cfg once per core, makes the clones executable, and does a search-and-replace in each clone to set the variables that tell it which part of the file to process, then runs the clones in the background. When a clone is done it generates a clone$core.ok file, so a loop recombines the partial results into a single file only once all the .ok files have been generated. It can be done with wait, but I fancy my way.
http://www.linux-romania.com/product.php?id_product=76 (look at the bottom; it is partially translated into English). This way I can process 20,000 articles with 16 columns in 2 minutes (quad core) instead of 8 (single core). You have to watch the CPU temperature, because all the cores are running at 100%. A rough sketch of the flow follows below.
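A rough sketch of that split/clone/recombine flow, with hypothetical names (bigfile for the input, process.sh standing in for script.cfg), assuming GNU split:

NCPU=$(nproc)
split -n l/$NCPU -d bigfile bigfile.    # one piece per core: bigfile.00, bigfile.01, ...
for i in $(seq -f '%02g' 0 $((NCPU - 1))); do
    # each clone works on its own piece, then drops an .ok marker when done
    { ./process.sh "bigfile.$i" > "out.$i" && touch "clone$i.ok"; } &
done
until [ "$(ls clone*.ok 2>/dev/null | wc -l)" -eq "$NCPU" ]; do
    sleep 1    # recombine only once every .ok file exists
done
cat out.* > result
rm -f bigfile.?? out.* clone*.ok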
#!/bin/bash
IDLE_CPU=1
NCPU=$(nproc)

int_childs() {
    trap - INT
    # forward SIGINT to the process group of every running job
    while IFS=$'\n' read -r pid; do
        kill -s SIGINT -- "-$pid"
    done < <(jobs -p -r)
    kill -s SIGINT -- "-$$"
}

# cmds is an array that holds the commands to run
# the complex part is display, which handles all command output
# and serializes it correctly
trap int_childs INT
{
    exec 2>&1
    set -m    # job control: each background job gets its own process group
    if [ "$NCPU" -gt "$IDLE_CPU" ]; then
        for cmd in "${cmds[@]}"; do
            $cmd &
            # throttle: while at the job limit, wait for one job to exit
            while [ "$(jobs -pr | wc -l)" -ge $((NCPU - IDLE_CPU)) ]; do
                wait -n    # requires bash 4.3+
            done
        done
        wait
    else
        for cmd in "${cmds[@]}"; do
            $cmd
        done
    fi
} | display
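For illustration only, a hypothetical way to populate cmds and define display before the block above runs (neither name comes from a library; display is whatever consumer you want to serialize the merged output):

cmds=("sleep 2" "sleep 1" "echo done")
display() {
    cat    # trivial serializer: pass the merged output through unchanged
}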
Another very handy way to do this is with GNU parallel, which is well worth installing if you don't already have it; it is invaluable if the tasks don't all take the same amount of time.
seq 1000 | parallel -j 8 --workdir "$PWD" ./myrun {}
will launch ./myrun 1, ./myrun 2, etc., making sure 8 jobs at a time are running. It can also take lists of nodes if you want to run on several nodes at once, e.g. in a PBS job; our instructions to our users for how to do that on our system are here.
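For the multi-node case, a hedged sketch: GNU parallel's --sshloginfile option can read the machine list that PBS provides in $PBS_NODEFILE (this assumes a shared filesystem and passwordless ssh between the nodes; with remote logins, -j 8 means 8 jobs per machine):

seq 1000 | parallel --sshloginfile "$PBS_NODEFILE" -j 8 --workdir "$PWD" ./myrun {}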
Updated to add: you want to make sure you're using GNU parallel, not the more limited utility of the same name that comes in the moreutils package (the divergent history of the two is described here).