Preventing lock propagation

Posted by 南楼画角 on 2019-12-23 07:48:58

Question


A simple and seemingly reliable way to do locking under bash is:

exec 9>>lockfile
flock 9

However, bash notoriously propagates such an fd (and with it the lock) to everything it forks, including executed programs.
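For example (a minimal sketch; the lockfile path and the sleep stand-in are arbitrary), a background child inherits fd 9 and keeps the lock alive even after the script itself has exited:

exec 9>>lockfile
flock 9
sleep 100 &   # inherits fd 9 and therefore the lock
exit          # lock is NOT released: sleep still holds fd 9
# afterwards, from another shell:
#   flock -n lockfile true   # fails until the sleep exits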

Is there any way to tell bash not to duplicate the fd? It is convenient that the lock is attached to an fd which goes away when the program terminates, no matter how it terminates.

I know I can do stuff like:

run_some_prog 9>&-

But that is quite tedious.

Is there any better solution?


Answer 1:


You can use the -o command-line option of flock(1) (long option --close, which may be preferable in scripts for its self-documenting nature) to have the file descriptor closed before flock(1) executes the command:

   -o, --close
          Close the file descriptor on which the lock is held
          before executing command.  This is useful if command
          spawns a child process which should not be holding
          the lock.
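For example (a sketch of the command form; run_some_prog stands for whatever must not inherit the lock):

# flock itself opens lockfile and keeps holding the lock while
# run_some_prog runs, but the command's process gets the fd closed
# before the exec, so run_some_prog and anything it spawns never
# see the lock fd
flock --close lockfile run_some_prog

Note that this is flock's command form; it does not help when the lock is held on an fd opened by the shell, as the later answers point out.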



Answer 2:


Apparently flock -o FD does not fix the problem (--close applies only to flock's command form, not to an already-open FD). A trick to get rid of the superfluous FD for later commands in the same shell script is to wrap the remaining part in a block which closes the FD, like this:

var=outside

exec 9>>lockfile
flock -n 9 || exit
{
    : commands which do not see FD 9

    var=exported
    # exit here would exit the whole script

    # see CLUMSY below, outside this code snippet
} 9<&-
# $var is "exported"

# drop the lock by closing the FD
exec 9<&-

: remaining commands without the lock

This is a bit CLUMSY, because the close of the FD is so far separated from where the lock is taken.

You can refactor this, losing the "natural" command flow but keeping together the things which belong together:

functions_running_with_lock()
{
    : commands which do not see FD 9

    var=exported
    # exit here would exit the whole script
}

var=outside

exec 9>>lockfile
flock -n 9 || exit

functions_running_with_lock 9<&-

# $var is "exported"

# drop the lock by closing the FD
exec 9<&-

: remaining commands without lock

A slightly nicer variant keeps the natural command flow, at the expense of another fork (one additional process) and a somewhat different workflow, which often comes in handy. Note, however, that it cannot set variables in the outer shell:

var=outside

exec 9>>lockfile
flock -n 9 || exit
(
    exec 9<&-

    : commands which do not see FD 9

    var=exported
    # exit does not interrupt the whole script here
    exit
    var=neverreached
)
# optionally test the return status of the parentheses using $?

# $var is "outside" again

# drop the lock by closing the FD
exec 9<&-

: remaining commands without lock

BTW, if you really want to be sure that bash does not introduce additional file descriptors (to "hide" the closed FD and skip a real fork), for example if you execute some daemon which would then hold the lock forever, the latter variant is recommended, just to be sure. lsof -nP and strace your_script are your friends.
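For example (a sketch; your_script is a placeholder for the script under test), to verify that nothing unexpected still holds the lock file open:

# list processes that still have lockfile open
# (-n/-P: skip DNS and port-name resolution)
lsof -nP lockfile

# follow forked children and watch fd-related syscalls
strace -f -e trace=open,openat,close,fcntl,flock ./your_script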




Answer 3:


There is no way to mark an FD as close-on-exec from within bash, so no, there is no better solution.




Answer 4:


-o doesn't work with file descriptors; it only works with files. You have to use -u to unlock the file descriptor.

What I do is this:

# start of the locked section
LOCKFILE=/tmp/somelockfile
exec 8>"$LOCKFILE"
if ! flock -n 8; then
    echo Rejected   # for testing, remove this later
    exit 1          # exit, since we don't have the lock
fi

# some code which shouldn't run in parallel

# end of the locked section
flock -u 8
rm -f "$LOCKFILE"

This way the lock is released by the process that took it, and since every other process exits immediately, only the lock holder ever unlocks the file descriptor and removes the lock file.



Source: https://stackoverflow.com/questions/8866175/preventing-lock-propagation
