Quick-and-dirty way to ensure only one instance of a shell script is running at a time

2020-11-22 02:57

What's a quick-and-dirty way to make sure that only one instance of a shell script is running at a given time?

30 Answers
  • 2020-11-22 03:27

    When targeting a Debian machine, I find the lockfile-progs package to be a good solution. procmail also ships a lockfile tool. However, sometimes I am stuck with neither of these.
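
    For reference, those packaged tools are used roughly like this (a sketch only; the exact flags and lock-file naming conventions are from memory, so check the man pages):

    # procmail's lockfile: retry 3 times, then give up
    lockfile -r 3 /tmp/myscript.lock || exit 1
    trap 'rm -f /tmp/myscript.lock' EXIT

    # lockfile-progs (Debian); note it may append .lock to the given name
    lockfile-create /var/lock/myscript || exit 1
    trap 'lockfile-remove /var/lock/myscript' EXIT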

    Here's my solution, which uses mkdir for atomicity and a PID file to detect stale locks. This code is currently in production on a Cygwin setup and works well.

    To use it, simply call exclusive_lock_require when you need to get exclusive access to something. An optional lock name parameter lets you share locks between different scripts. There are also two lower-level functions (exclusive_lock_try and exclusive_lock_retry) should you need something more complex; a usage sketch follows the functions below.

    function exclusive_lock_try() # [lockname]
    {
    
        local LOCK_NAME="${1:-`basename $0`}"
    
        LOCK_DIR="/tmp/.${LOCK_NAME}.lock"
        local LOCK_PID_FILE="${LOCK_DIR}/${LOCK_NAME}.pid"
    
        if [ -e "$LOCK_DIR" ]
        then
            local LOCK_PID="`cat "$LOCK_PID_FILE" 2> /dev/null`"
            if [ ! -z "$LOCK_PID" ] && kill -0 "$LOCK_PID" 2> /dev/null
            then
                # locked by non-dead process
                echo "\"$LOCK_NAME\" lock currently held by PID $LOCK_PID"
                return 1
            else
                # orphaned lock, take it over
                ( echo $$ > "$LOCK_PID_FILE" ) 2> /dev/null && local LOCK_PID="$$"
            fi
        fi
        if [ "`trap -p EXIT`" != "" ]
        then
            # already have an EXIT trap
            echo "Cannot get lock, already have an EXIT trap"
            return 1
        fi
        if [ "$LOCK_PID" != "$$" ] &&
            ! ( umask 077 && mkdir "$LOCK_DIR" && umask 177 && echo $$ > "$LOCK_PID_FILE" ) 2> /dev/null
        then
            local LOCK_PID="`cat "$LOCK_PID_FILE" 2> /dev/null`"
            # unable to acquire lock, new process got in first
            echo "\"$LOCK_NAME\" lock currently held by PID $LOCK_PID"
            return 1
        fi
        trap "/bin/rm -rf \"$LOCK_DIR\"; exit;" EXIT
    
        return 0 # got lock
    
    }
    
    function exclusive_lock_retry() # [lockname] [retries] [delay]
    {
    
        local LOCK_NAME="$1"
        local MAX_TRIES="${2:-5}"
        local DELAY="${3:-2}"
    
        local TRIES=0
        local LOCK_RETVAL
    
        while [ "$TRIES" -lt "$MAX_TRIES" ]
        do
    
            if [ "$TRIES" -gt 0 ]
            then
                sleep "$DELAY"
            fi
            local TRIES=$(( $TRIES + 1 ))
    
            if [ "$TRIES" -lt "$MAX_TRIES" ]
            then
                exclusive_lock_try "$LOCK_NAME" > /dev/null
            else
                exclusive_lock_try "$LOCK_NAME"
            fi
            LOCK_RETVAL="${PIPESTATUS[0]}"
    
            if [ "$LOCK_RETVAL" -eq 0 ]
            then
                return 0
            fi
    
        done
    
        return "$LOCK_RETVAL"
    
    }
    
    function exclusive_lock_require() # [lockname] [retries] [delay]
    {
        if ! exclusive_lock_retry "$@"
        then
            exit 1
        fi
    }
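
    For illustration, a minimal usage sketch (the lock name "nightly-backup" and the sourced path are hypothetical):

    #!/bin/bash

    . /path/to/locking-functions.sh   # hypothetical file holding the functions above

    # Exit with status 1 if another process still holds the "nightly-backup" lock
    # after the default 5 attempts, 2 seconds apart.
    exclusive_lock_require "nightly-backup"

    # ... do the real work; the EXIT trap set by the lock removes it on exit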
    
  • 2020-11-22 03:27

    Add this line at the beginning of your script:

    [ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -en "$0" "$0" "$@" || :
    

    It's boilerplate code from man flock.

    If you want more logging, use this one:

    [ "${FLOCKER}" != "$0" ] && { echo "Trying to start build from queue... "; exec bash -c "FLOCKER='$0' flock -E $E_LOCKED -en '$0' '$0' '$@' || if [ \"\$?\" -eq $E_LOCKED ]; then echo 'Locked.'; fi"; } || echo "Lock is free. Completing."
    

    This sets and checks locks using the flock utility. The code detects whether it is being run for the first time by checking the FLOCKER variable: if it is not set to the script name, the script re-executes itself recursively under flock with FLOCKER initialized; if FLOCKER is already set correctly, then flock succeeded on the previous iteration and it is OK to proceed. If the lock is busy, it fails with a configurable exit code.
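
    For illustration, a minimal self-contained sketch combining the simple one-liner with a configurable exit code (the value 200 for E_LOCKED is an arbitrary example):

    #!/bin/bash
    # E_LOCKED is the exit code flock returns via -E when the lock is busy; 200 is arbitrary
    E_LOCKED=200
    [ "${FLOCKER}" != "$0" ] && exec env FLOCKER="$0" flock -E "$E_LOCKED" -en "$0" "$0" "$@" || :

    # Only one instance gets past this point at a time
    echo "Lock acquired, running as PID $$"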

    This does not seem to work on Debian 7 (it fails with "flock: ... Text file busy"), but seems to work again with the experimental util-linux 2.25 package. The problem can be worked around by removing write permission from your script.

  • 2020-11-22 03:28

    Here's an implementation that uses a lockfile and echoes a PID into it. This serves as protection if the process is killed before the pidfile is removed:

    LOCKFILE=/tmp/lock.txt
    if [ -e ${LOCKFILE} ] && kill -0 `cat ${LOCKFILE}`; then
        echo "already running"
        exit
    fi
    
    # make sure the lockfile is removed when we exit and then claim it
    trap "rm -f ${LOCKFILE}; exit" INT TERM EXIT
    echo $$ > ${LOCKFILE}
    
    # do stuff
    sleep 1000
    
    rm -f ${LOCKFILE}
    

    The trick here is kill -0, which doesn't deliver any signal but just checks whether a process with the given PID exists. Also, the call to trap ensures that the lockfile is removed even when your process is killed (except by kill -9).

  • 2020-11-22 03:29

    This example is explained in man flock, but it needs some improvements, because we should handle errors and exit codes:

    #!/bin/bash
    # Note: set -e is deliberately not used here. It aborts the script as soon as any
    # command exits non-zero, which makes it impossible to capture and handle exit
    # codes yourself; not every non-zero exit status is a failure.
    
    ( # start subshell
      # Wait up to 10 seconds for an exclusive lock on /var/lock/.myscript.exclusivelock (fd 200)
      flock -x -w 10 200
      if [ "$?" != "0" ]; then echo "Cannot lock!"; exit 1; fi
      echo $$>>/var/lock/.myscript.exclusivelock # for backward lock-file compatibility; note that the redirection on the closing ")" below opens fd 200 BEFORE this line runs
      # Do stuff
      # You can manage exit codes properly here across multiple commands.
      # I suggest moving all of this into an external procedure that can handle exit codes cleanly.
    
    ) 200>/var/lock/.myscript.exclusivelock   # end subshell; fd 200 is opened on the lock file here
    
    FLOCKEXIT=$?  # save the exit status of the subshell
    # do some finishing commands here
    
    exit $FLOCKEXIT   # propagate the exit code properly; may be useful to calling scripts
    

    You can use another method that I have used in the past: listing processes. But it is more complicated than the method above: you list processes with ps, filter by the script's name, add grep -v grep to remove the grep process itself, count the matches with grep -c ., and compare the count against a threshold. It is complicated and unreliable; a rough sketch follows.
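
    For completeness, a rough sketch of that counting approach (fragile: it can miscount when other processes have similar names, and it races between the check and the start):

    #!/bin/bash
    SCRIPT_NAME=$(basename "$0")
    # count processes matching the script name, excluding the grep itself
    COUNT=$(ps aux | grep "$SCRIPT_NAME" | grep -v grep | grep -c .)
    if [ "$COUNT" -gt 1 ]; then   # this instance itself counts as 1
        echo "already running"
        exit 1
    fi
    # ... rest of the script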

  • 2020-11-22 03:30

    If flock's limitations, which have already been described elsewhere on this thread, aren't an issue for you, then this should work:

    #!/bin/bash
    
    {
        # exit if we are unable to obtain a lock; this would happen if 
        # the script is already running elsewhere
        # note: -x (exclusive) is the default
        flock -n 100 || exit
    
        # put commands to run here
        sleep 100
    } 100>/tmp/myjob.lock 
    
  • 2020-11-22 03:31

    The flock path is the way to go. Think about what happens when the script suddenly dies: in the flock case you just lose the lock, and that is not a problem. Also, note that an evil trick is to take a flock on the script itself, but that of course lets you run full-steam-ahead into permission problems; a sketch follows.
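
    For reference, a sketch of that self-locking trick (fd 200 is an arbitrary choice; opening the script read-only avoids needing write permission on it):

    #!/bin/bash
    # take an exclusive advisory lock on this very script file
    exec 200< "$0"
    flock -n 200 || { echo "already running"; exit 1; }
    # ... rest of the script; the lock is released when the script exits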
