Locking a file in Python

悲&欢浪女 2020-11-22 03:00

I need to lock a file for writing in Python. It will be accessed from multiple Python processes at once. I have found some solutions online, but most fail for my purposes as

13 Answers
  • 2020-11-22 03:56

    I have been looking at several solutions to do that and my choice has been oslo.concurrency

    It's powerful and relatively well documented. It's based on fasteners.

    Other solutions:

    • Portalocker: requires pywin32, which is an .exe installation, so it cannot be installed via pip
    • fasteners: poorly documented
    • lockfile: deprecated
    • flufl.lock: NFS-safe file locking for POSIX systems.
    • simpleflock : Last update 2013-07
    • zc.lockfile : Last update 2016-06 (as of 2017-03)
    • lock_file : Last update in 2007-10
  • 2020-11-22 03:56

    Coordinating access to a single file at the OS level is fraught with all kinds of issues that you probably don't want to solve.

    Your best bet is to have a separate process that coordinates read/write access to that file.

  • 2020-11-22 03:57

    Locking is platform and device specific, but generally, you have a few options:

    1. Use flock(), or an equivalent (if your OS supports it). This is advisory locking: unless you check for the lock, it's ignored.
    2. Use a lock-copy-move-unlock methodology, where you copy the file, write the new data, then move it (move, not copy - move is an atomic operation in Linux -- check your OS), and you check for the existence of the lock file.
    3. Use a directory as a "lock". This is necessary if you're writing to NFS, since NFS doesn't support flock().
    4. There's also the possibility of using shared memory between the processes, but I've never tried that; it's very OS-specific.
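    Option 3 can be sketched like this, using the fact that mkdir either creates the directory or fails atomically (a sketch only; the names mylock.d and data.txt and the timeout values are made up):

```python
import os
import time

LOCKDIR = "mylock.d"  # hypothetical lock-directory name

def acquire(timeout=10.0, poll=0.05):
    """Spin until we create the lock directory; mkdir is an atomic test-and-set."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            os.mkdir(LOCKDIR)
            return
        except FileExistsError:
            if time.monotonic() > deadline:
                raise TimeoutError("could not acquire lock")
            time.sleep(poll)

def release():
    os.rmdir(LOCKDIR)

acquire()
try:
    with open("data.txt", "a") as f:
        f.write("safely written\n")
finally:
    release()  # always release, even if the write fails
```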

    For all of these methods, you'll have to use a spin-lock (retry-after-failure) technique for acquiring and testing the lock. This does leave a small window for mis-synchronization, but it's generally small enough not to be a major issue.

    If you're looking for a solution that is cross-platform, then you're better off logging to another system via some other mechanism (the next best thing is the NFS technique above).

    Note that SQLite is subject to the same constraints over NFS that normal files are, so you can't write to an SQLite database on a network share and get synchronization for free.
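    As a concrete sketch of option 1 combined with the spin-lock technique (POSIX-only; fcntl is in the standard library, but the file name and retry parameters here are made up):

```python
import fcntl
import time

def append_locked(path, text, retries=50, delay=0.1):
    """Append to a file under an advisory flock(), retrying if it's busy."""
    with open(path, "a") as f:
        for _ in range(retries):
            try:
                # Non-blocking exclusive lock; raises if another process holds it.
                fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
                break
            except BlockingIOError:
                time.sleep(delay)  # spin: retry after failure
        else:
            raise TimeoutError("lock busy")
        try:
            f.write(text)
            f.flush()
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

append_locked("shared.txt", "one line\n")
```

    Since the lock is advisory, this only protects against other processes that also call flock() on the same file.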

  • 2020-11-22 03:58

    I prefer lockfile — Platform-independent file locking

  • 2020-11-22 03:58

    I have been working on a situation like this where I run multiple copies of the same program from within the same directory/folder and log errors. My approach was to write a "lock file" to disk before opening the log file. The program checks for the presence of the "lock file" before proceeding, and waits for its turn if the "lock file" exists.

    Here is the code:

    from datetime import datetime
    from os import remove, stat
    from os.path import exists
    from time import time

    def errlogger(error):

        # spin until the lock file is free, then take it
        while True:
            if not exists('errloglock'):
                lock = open('errloglock', 'w')
                if exists('errorlog'): log = open('errorlog', 'a')
                else: log = open('errorlog', 'w')
                log.write(str(datetime.utcnow())[0:-7] + ' ' + error + '\n')
                log.close()
                remove('errloglock')
                return
            else:
                # break the lock if it looks stale (see the edit below)
                check = stat('errloglock')
                if time() - check.st_ctime > 0.01: remove('errloglock')
                print('waiting my turn')


    EDIT--- After thinking over some of the comments about stale locks above, I edited the code to add a check for staleness of the "lock file." Timing several thousand iterations of this function on my system gave an average of 0.002066... seconds from just before:

    lock = open('errloglock', 'w')
    

    to just after:

    remove('errloglock')
    

    so I figured I would start with 5 times that amount to indicate staleness and monitor the situation for problems.

    Also, as I was working with the timing, I realized that I had a bit of code that was not really necessary:

    lock.close()
    

    which I had immediately following the open statement, so I have removed it in this edit.
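    One caveat worth noting: the exists() check and the open() that follows it are two separate steps, so two processes can both see no lock file and both create one. A variant that makes the creation itself the test, using os.open() with O_CREAT | O_EXCL (a sketch reusing the answer's file names; the stale threshold is the answer's 5x figure):

```python
import os
import time

def acquire_lockfile(path='errloglock', stale=0.01, poll=0.002):
    """Atomically create the lock file; O_EXCL makes creation fail if it exists."""
    while True:
        try:
            fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return
        except FileExistsError:
            try:
                # break the lock if it looks stale, as in the answer above
                if time.time() - os.stat(path).st_ctime > stale:
                    os.remove(path)
            except FileNotFoundError:
                pass  # another process removed it between our checks
            time.sleep(poll)

acquire_lockfile()
try:
    with open('errorlog', 'a') as log:
        log.write('entry\n')
finally:
    os.remove('errloglock')
```

    This closes the check-then-create window, though like the original it is only reliable on a local filesystem, not over NFS.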

  • 2020-11-22 03:59

    The scenario is this: the user requests a file to do something. If the user sends the same request again, the user is informed that the second request will not be processed until the first one finishes. That's why I use a lock mechanism to handle this issue.

    Here is my working code:

    from lockfile import LockFile

    def try_lock(lock_file_path):
        lock = LockFile(lock_file_path)
        if not lock.is_locked():
            lock.acquire()
            status = lock.path + ' is locked.'
        else:
            status = lock.path + ' is already locked.'
        print(status)
        return status
