Is my spin lock implementation correct and optimal?


So I'm wondering:

* Is it correct?

In the context mentioned, I would say yes.

* Is it optimal?

That's a loaded question. By reinventing the wheel you are also reinventing a lot of problems that have been solved by other implementations:

  • I'd expect a waste loop on failure, where you spin without issuing atomic operations on the lock word (see the sketch after this list).

  • The full barrier in the unlock only needs to have release semantics (that's why you'd use __sync_lock_release, so that you'd get st1.rel on Itanium instead of mf, or an lwsync on PowerPC, ...). If you really only care about x86 or x86_64, the types of barriers used here don't matter as much (but if you were to make the jump to Intel's Itanium for an HP-IPF port, you wouldn't want this).

  • You don't have the pause instruction that you'd normally put inside your waste loop.

  • When there is contention you want to fall back to something that blocks: semop, or even a dumb sleep in desperation. If you really need the performance that this buys you, then the futex suggestion is probably a good one. If you need the performance this buys you badly enough to maintain this code, you have a lot of research to do.
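Putting the first three points together, the lock path might look like the following minimal sketch (assuming GCC on x86/x86_64; the function name is mine, and pause is the x86 spin-wait hint):

void spin_lock(volatile int *exclusion)
{
    /* Atomic exchange: we own the lock if we replaced a 0 with a 1. */
    while (__sync_lock_test_and_set(exclusion, 1)) {
        /* Waste loop: plain reads only, no atomic traffic on the bus. */
        while (*exclusion)
            asm volatile("pause");
    }
}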

Note that there was a comment saying that the release barrier wasn't required. That isn't true even on x86, because the release barrier also serves as an instruction to the compiler not to shuffle other memory accesses around the "barrier", very much like what you'd get if you used asm ("" ::: "memory").
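To make that concrete, a release-only unlock could be sketched in two ways (the function names are mine; the second variant is the x86-specific compiler-barrier idea just mentioned):

void spin_unlock(volatile int *exclusion)
{
    /* Store 0 with release semantics; no full mf/mfence required. */
    __sync_lock_release(exclusion);
}

void spin_unlock_x86(volatile int *exclusion)
{
    /* Compiler-only barrier: keeps the compiler from sinking earlier
       accesses below the store; x86 hardware already orders stores. */
    asm volatile("" ::: "memory");
    *exclusion = 0;
}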

* on compare and swap

On x86, __sync_lock_test_and_set will map to a xchg instruction, which has an implied lock prefix. This is definitely the most compact generated code (especially if you use a byte for the "lock word" instead of an int), but no less correct than if you used LOCK CMPXCHG. Compare-and-swap can be used for fancier algorithms (like putting a non-zero pointer to metadata for the first "waiter" into the lock word on failure).
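For example, a byte-sized lock word might be sketched like this (assuming GCC on x86; the names are mine), and the exchange compiles to a one-byte xchg:

static volatile unsigned char lock_byte;   /* hypothetical one-byte lock word */

void lock_compact(void)
{
    while (__sync_lock_test_and_set(&lock_byte, 1))   /* xchgb with implied lock */
        ;
}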

Looks fine to me. Btw, here is the textbook implementation that is more efficient even in the contended case.

void lock(volatile int *exclusion)
{
    /* Atomic exchange: a non-zero return means someone else holds the lock. */
    while (__sync_lock_test_and_set(exclusion, 1))
        /* Spin on plain reads until the lock looks free, then retry the
           atomic exchange; this avoids bouncing the cache line around. */
        while (*exclusion)
            ;
}

In response to your questions:

  1. Looks ok to me
  2. Assuming the OS supports GCC (and GCC has these functions implemented), this should work on all x86 operating systems. The GCC documentation suggests that a warning will be produced if they are not supported on a given platform.
  3. There's nothing x86-64-specific here, so I don't see why not. This can be expanded to cover any architecture that GCC supports; however, there may be more optimal ways of achieving this on non-x86 architectures.
  4. You might be slightly better off using __sync_lock_release() in the unlock() case, as it clears the lock word and provides release semantics in a single operation. However, assuming your assertion that there will rarely be contention holds, it looks good to me.

If you're on a recent version of Linux, you may be able to use a futex -- a "fast userspace mutex":

A properly programmed futex-based lock will not use system calls except when the lock is contended.

In the uncontested case, which you're trying to optimize for with your spinlock, the futex will behave just like a spinlock, without requiring a kernel syscall. If the lock is contested, the waiting takes place in the kernel without busy-waiting.
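As an illustration, a minimal futex-based lock could be sketched as below, following the scheme in Ulrich Drepper's paper "Futexes Are Tricky" (Linux-specific; the sys_futex wrapper and the function names are mine):

#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Lock states: 0 = unlocked, 1 = locked, 2 = locked with waiters. */
static long sys_futex(int *uaddr, int op, int val)
{
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

void futex_lock(int *f)
{
    int c = __sync_val_compare_and_swap(f, 0, 1);
    if (c == 0)
        return;                               /* uncontended: no syscall */
    do {
        /* Mark the lock as contended, then sleep until woken. */
        if (c == 2 || __sync_val_compare_and_swap(f, 1, 2) != 0)
            sys_futex(f, FUTEX_WAIT, 2);
    } while ((c = __sync_val_compare_and_swap(f, 0, 2)) != 0);
}

void futex_unlock(int *f)
{
    if (__sync_fetch_and_sub(f, 1) != 1) {    /* there may be waiters */
        *f = 0;
        sys_futex(f, FUTEX_WAKE, 1);
    }
}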


I wonder if the following CAS implementation is the correct one on x86_64. It is almost twice as fast on my i7 X920 laptop (Fedora 13 x86_64, gcc 4.4.5).

inline void lock(volatile int *locked) {
    /* Spin until the CAS observes 0 and installs 1. */
    while (__sync_val_compare_and_swap(locked, 0, 1));
    asm volatile("lfence" ::: "memory");  /* load fence plus compiler barrier */
}
inline void unlock(volatile int *locked) {
    *locked = 0;
    asm volatile("sfence" ::: "memory");  /* store fence plus compiler barrier */
}

I can't comment on correctness, but the title of your question raised a red flag before I even read the question body. Synchronization primitives are devilishly hard to get right... if at all possible, you're better off using a well-designed/maintained library, perhaps pthreads or boost::thread.

One improvement I'd suggest is using TATAS (test-and-test-and-set): CAS operations are considered quite expensive for the processor, so it's better to avoid them when possible. Another thing: make sure you won't suffer from priority inversion (what if a thread with high priority tries to acquire the lock while a thread with low priority needs to run in order to free it? On Windows, for example, this issue will ultimately be solved by the scheduler using a priority boost). You can also explicitly give up your thread's time slice when you fail to acquire the lock some number of times in a row, say 20; a sketch follows.
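As a sketch of that last idea (the retry count of 20 and the use of sched_yield are my own illustration, not from any particular implementation):

#include <sched.h>

void lock_with_yield(volatile int *exclusion)
{
    for (;;) {
        int tries;
        for (tries = 0; tries < 20; ++tries) {
            /* TATAS: plain test first; the atomic test-and-set only
               runs when the lock looks free. */
            if (*exclusion == 0 && !__sync_lock_test_and_set(exclusion, 1))
                return;
        }
        sched_yield();   /* give up the time slice so the holder can run */
    }
}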

Your unlock procedure doesn't need the memory barrier; the assignment to exclusion is atomic as long as it is dword-aligned on x86.

In the specific case of x86 (32/64) I don't think you need a memory fence at all in the unlock code. x86 doesn't do any reordering, except that stores are first put in a store buffer, so their becoming visible to other threads can be delayed. A thread that does a store and then reads from the same variable will read from its own store buffer if it has not yet been flushed to memory. So all you need is an asm statement to prevent compiler reorderings. You run the risk of one thread holding the lock slightly longer than necessary from the perspective of other threads, but if you don't care about contention that shouldn't matter. In fact, pthread_spin_unlock is implemented like that on my system (Linux x86_64).

My system also implements pthread_spin_lock using lock decl lockvar; jne spinloop; instead of using xchg (which is what __sync_lock_test_and_set uses), but I don't know if there's actually a performance difference.

There are a few wrong assumptions.

First, a spin lock only makes sense if the resource is locked on another CPU. If the resource is locked on the same CPU (which is always the case on uniprocessor systems), you need to yield to the scheduler for the resource to be unlocked. Your current code will work on a uniprocessor system because the scheduler will switch tasks automatically, but it's a waste of resources.

On a multi-processor system the same thing can happen, as a task may migrate from one CPU to another. In short, use of a spin lock is correct only if you can guarantee that your tasks will run on different CPUs.

Secondly, locking a mutex IS fast (as fast as a spinlock) when it is unlocked. Locking (and unlocking) a mutex is slow (very slow) only if the mutex is already locked.

So, in your case, I suggest using mutexes.
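For what it's worth, the mutex route is also the least code (a minimal sketch; error handling omitted):

#include <pthread.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

void with_resource(void)
{
    pthread_mutex_lock(&m);     /* cheap when uncontended, blocks when not */
    /* ... use the shared resource ... */
    pthread_mutex_unlock(&m);
}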
