What's a quick-and-dirty way to make sure that only one instance of a shell script is running at a given time?
To make locking reliable you need an atomic operation. Many of the proposals above are not atomic. The proposed lockfile(1) utility looks promising, as its man page says it is "NFS-resistant". If your OS does not support lockfile(1) and your solution has to work on NFS, you do not have many options...
NFSv2 has two atomic operations: symlink and rename.
With NFSv3 the create call is also atomic.
Directory operations are NOT atomic under NFSv2 and NFSv3 (see the book 'NFS Illustrated' by Brent Callaghan, ISBN 0-201-32570-5; Brent is an NFS veteran at Sun).
Knowing this, you can implement spin-locks for files and directories (in shell, not PHP):
lock current dir:
while ! ln -s . lock; do :; done
lock a file:
while ! ln -s "${f}" "${f}.lock"; do :; done
unlock current dir (assuming the running process really acquired the lock):
mv lock deleteme && rm deleteme
unlock a file (assuming the running process really acquired the lock):
mv "${f}.lock" "${f}.deleteme" && rm "${f}.deleteme"
Removal is not atomic either, hence the rename (which is atomic) first, then the remove.
For the symlink and rename calls, both filenames have to reside on the same filesystem. My suggestion: use only simple filenames (no paths) and put the file and its lock in the same directory.
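Putting the pieces together, a minimal sketch of guarding a critical section on one file might look like the following (data.txt is a placeholder; a sleep is added so the loop doesn't spin hot):

#!/bin/sh
f=data.txt

# Spin until the symlink can be created; ln -s is atomic and fails if the name exists.
while ! ln -s "${f}" "${f}.lock" 2>/dev/null; do
    sleep 1
done

# ... critical section: work on ${f} ...

# Release: rename first (atomic), then remove.
mv "${f}.lock" "${f}.deleteme" && rm -f "${f}.deleteme"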
Create a lock file in a known location and check for existence on script start? Putting the PID in the file might be helpful if someone's attempting to track down an errant instance that's preventing execution of the script.
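In sketch form, that naive approach looks something like this (the lockfile path is a placeholder); as the next answer points out, the existence check and the write are two separate steps:

#!/bin/bash
lockfile=/var/lock/myscript.lock

# Check-then-write: note the window between the test and the write.
if [ -e "$lockfile" ]; then
    echo "Already running as PID $(cat "$lockfile")"
    exit 1
fi
echo "$$" > "$lockfile"

# ... do stuff ...

rm -f "$lockfile"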
Although bmdhacks's answer is almost right, there is a slight chance that the second script runs after the first has checked the lockfile but before it has written it. Both would then write the lock file and both would be running. Here is how to make it work for sure:
lockfile=/var/lock/myscript.lock
if ( set -o noclobber; echo "$$" > "$lockfile" ) 2> /dev/null; then
    trap 'rm -f "$lockfile"; exit $?' INT TERM EXIT
    # ... do your stuff here ...
else
    # or you can decide to skip the "else" part if you want
    echo "Another instance is already running!"
    exit 1
fi
The noclobber option makes the redirection fail if the file already exists, so the redirection is effectively atomic: you write and check the file with one command. You don't need to remove the lockfile at the end of the script; it will be removed by the trap. I hope this helps people who read it later.
P.S. I didn't see that Mikel had already answered the question correctly, though he didn't include the trap command, which reduces the chance the lock file will be left behind after stopping the script with Ctrl-C, for example. So this is the complete solution.
Use flock(1) to take an exclusive, scoped lock on a file descriptor. This way you can even synchronize different parts of the script.
#!/bin/bash
(
    # Wait for lock on /var/lock/.myscript.exclusivelock (fd 200) for 10 seconds
    flock -x -w 10 200 || exit 1

    # Do stuff
) 200>/var/lock/.myscript.exclusivelock
This ensures that the code between ( and ) is run by only one process at a time and that no process waits too long for a lock.
Caveat: this particular command is part of util-linux. If you run an operating system other than Linux, it may or may not be available.
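A common variation, if you would rather lock the remainder of the script than a parenthesized block (same lock file; -n makes flock fail immediately instead of waiting):

#!/bin/bash
# Open the lock file on fd 200 for the rest of the script.
exec 200>/var/lock/.myscript.exclusivelock

# Fail fast if another instance holds the lock.
flock -n 200 || { echo "Another instance is already running!"; exit 1; }

# Do stuff; the lock is released when fd 200 is closed at exit.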
You can use GNU Parallel for this, as it works as a mutex when called as sem. So, in concrete terms, you can use:
sem --id SCRIPTSINGLETON yourScript
If you want a timeout too, use:
sem --id SCRIPTSINGLETON --semaphoretimeout -10 yourScript
A timeout < 0 means exit without running the script if the semaphore is not released within the timeout; a timeout > 0 means run the script anyway.
Note that you should give it a name (with --id); otherwise it defaults to the controlling terminal.
GNU Parallel is a very simple install on most Linux/OSX/Unix platforms; it is just a Perl script.
I find that bmdhacks' solution is the most practical, at least for my use case. The flock and lockfile approaches rely on removing the lockfile with rm when the script terminates, which can't always be guaranteed (e.g., after a kill -9).
I would change one minor thing about bmdhacks' solution: it makes a point of removing the lock file without stating that this is unnecessary for the safe working of this semaphore. Its use of kill -0 ensures that an old lockfile for a dead process will simply be ignored/overwritten.
My simplified solution is therefore to simply add the following to the top of your singleton:
## Test the lock
LOCKFILE=/tmp/singleton.lock
if [ -e "${LOCKFILE}" ] && kill -0 "$(cat "${LOCKFILE}")" 2>/dev/null; then
    echo "Script already running. bye!"
    exit
fi
## Set the lock
echo $$ > "${LOCKFILE}"
Of course, this script still has the flaw that two processes starting at almost the same time can race, since the lock test and set operations are not a single atomic action. But the mkdir-based solution proposed by lhunath has the flaw that a killed script may leave the directory behind, thus preventing other instances from running.
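For completeness, a sketch of that mkdir variant with a trap added to mitigate the leftover-directory problem (the path is a placeholder):

#!/bin/bash
LOCKDIR=/tmp/singleton.lock.d

# mkdir is atomic: it fails if the directory already exists.
if ! mkdir "${LOCKDIR}" 2>/dev/null; then
    echo "Script already running. bye!"
    exit 1
fi

# Remove the lock on normal exit or on INT/TERM.
trap 'rmdir "${LOCKDIR}"' INT TERM EXIT

# ... do stuff ...

The trap covers normal exits and most signals, but not SIGKILL, so a stale directory may still be left behind by a kill -9.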