I am writing a shell script in which a command runs and takes 2 minutes every time, and there is nothing I can do about that. But if I want to run this command 100 times, how can I run all 100 commands in parallel so that the output comes in about 2 minutes instead of 200?
GNU Parallel is what you want, unless you want to reinvent the wheel. Here are some more detailed examples, but the short of it:
ls | parallel gzip # gzip all files in a directory
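For the case in the question (run the same slow command 100 times), a minimal sketch, assuming your command is called MyCommand:

# Run MyCommand 100 times, up to 100 jobs at once.
# -n0 tells parallel to read one input value per job but pass no arguments to MyCommand.
seq 100 | parallel -j 100 -n0 MyCommand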
As @KingsIndian said, you can background tasks, which sort of lets them run in parallel. Beyond this, you can also keep track of them by process ID:
#!/bin/bash
# Function to be backgrounded
track() {
    sleep $1
    printf "\nFinished: %d\n" "$1"
}
start=$(date '+%s')
rand3="$(jot -s\ -r 3 5 10)"
# If you don't have `jot` (*BSD/OSX), substitute your own numbers here.
#rand3="5 8 10"
echo "Random numbers: $rand3"
# Make an associative array in which you'll record pids.
declare -A pids
# Background an instance of the track() function for each number, record the pid.
for n in $rand3; do
    track $n &
    pid=$!
    echo "Backgrounded: $n (pid=$pid)"
    pids[$pid]=$n
done
# Watch your stable of backgrounded processes.
# If a pid goes away, remove it from the array.
while [ -n "${pids[*]}" ]; do
    sleep 1
    for pid in "${!pids[@]}"; do
        if ! ps "$pid" >/dev/null; then
            unset pids[$pid]
            echo "unset: $pid"
        fi
    done
    if [ -z "${!pids[*]}" ]; then
        break
    fi
    printf "\rStill waiting for: %s ... " "${pids[*]}"
done
printf "\r%-25s \n" "Done."
printf "Total runtime: %d seconds\n" "$((`date '+%s'` - $start))"
You should also take a look at the Bash documentation on coprocesses.
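For example, a minimal coprocess sketch (bash 4+; the name UPPER is just for illustration): it starts a background loop and talks to it through the file descriptors bash stores in the UPPER array.

#!/bin/bash
# Start a coprocess that upper-cases whatever we send it.
coproc UPPER { while read -r line; do printf '%s\n' "${line^^}"; done; }
# Write to its stdin, then read its reply from its stdout.
echo "hello" >&"${UPPER[1]}"
read -r reply <&"${UPPER[0]}"
echo "coprocess replied: $reply"    # prints: coprocess replied: HELLO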
... run all 100 commands in parallel so that the output comes in about 2 minutes
That is only guaranteed if you have 100 processors on your system and the command is CPU-bound. The shell has no built-in utility that runs commands in parallel for you; what you can do is run your command in the background:
for ((i=0;i<200;i++))
do
MyCommand &
done
With &
(background), each execution is scheduled as soon as possible. But this doesn't guarantee that your code will be executed in less 200 min. It depends how many processors are there on your system.
If you have only one processor and each run of the command spends its full 2 minutes computing, then the processor is already busy the whole time and no cycles are wasted. In that case, running the commands in parallel does not help: there is only one processor and it is never free, so the processes simply wait their turn to be executed.
If you have more than one processor, the above method (the for loop) can help reduce the total execution time.
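For instance, a sketch that caps the number of simultaneous runs at the processor count (nproc is from GNU coreutils, and MyCommand is the same placeholder as above):

# Run MyCommand 100 times, at most $(nproc) instances at a time.
seq 100 | xargs -P "$(nproc)" -I{} MyCommand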