Fair critical section (Linux)

没有蜡笔的小新 2020-12-15 13:47

In a multi-threaded Linux application I use a mutex for critical sections. This works very well except for the fairness issue: it can happen that a thread leaving a critical section immediately re-enters it, so other threads that are already waiting never get a chance to run.

5 Answers
  • 2020-12-15 14:24

    If your claim holds true (I haven't got the time to read up, and it would appear as though you have researched this before posting the question), I suggest

     sleep(0);
    

    to explicitly yield between critical sections.

    while(true)
    {
        critsect.enter();
        ... do calculations ...
        ... maybe call a blocking operation so we sleep ...
        critsect.leave();
        sleep(0);
    }
    
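    A minimal C sketch of this pattern with pthreads (all names here are placeholders, not from the question); sched_yield() is used for the explicit yield, since it is the documented Linux/POSIX call for giving up the CPU:

    #include <pthread.h>
    #include <sched.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static long shared_counter;          /* hypothetical shared state */

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&lock);
            shared_counter++;            /* ... do calculations ... */
            pthread_mutex_unlock(&lock);
            sched_yield();               /* let a waiting thread run before re-locking */
        }
        return NULL;
    }
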
  • 2020-12-15 14:26

    OK, how about this:

    while(true)
    {
        sema.wait;
        critsect.enter();
        sema.post;
        ... do calculations ...
        ... maybe call a blocking operation so we sleep ...
        critsect.leave();
    }
    

    Initialize the semaphore count to 1. Have the other threads wait on the semaphore as well before trying to get the CS, and signal it when done. If the 'calculate' thread gets the semaphore, it can reach the CS and lock it. Once inside the lock, but before the long calculation, the semaphore is signaled and one other thread can then reach the CS but not get inside it. When the 'calculate' thread exits the lock, it cannot loop round and relock it, because the semaphore count is zero, so the other thread gets the lock. The 'calculate' thread has to wait on the semaphore until the other thread that got in has finished with its access and signals the semaphore.

    In this way, another thread can 'reserve' access to the data even though it cannot actually get at it yet.
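
    A minimal C sketch of this hand-off scheme, assuming POSIX semaphores and a pthread mutex (the names are illustrative):

    #include <pthread.h>
    #include <semaphore.h>

    static sem_t gate;                    /* sem_init(&gate, 0, 1) before starting the threads */
    static pthread_mutex_t critsect = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            sem_wait(&gate);              /* reserve the right to take the lock */
            pthread_mutex_lock(&critsect);
            sem_post(&gate);              /* let exactly one other thread queue up */
            /* ... do calculations, maybe block ... */
            pthread_mutex_unlock(&critsect);
        }
        return NULL;
    }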

    Rgds, Martin

  • 2020-12-15 14:28

    Even with a fair critical section, the code is probably going to have horrible performance, because if the critical section is held for long periods of time, threads will often be waiting for it.

    So I'd suggest you try to restructure the code so it does not need to hold the critical section for extended periods of time: either by using a different approach altogether (passing objects over a message queue is often recommended, because it's easy to get right), or at least by doing most of the calculation on local variables without holding the lock and then only taking the lock to store the results. If the lock is held for shorter periods of time, the threads will spend less time waiting for it, which will generally improve performance and make fairness a non-issue. You can also try to increase lock granularity (lock smaller objects separately), which will also reduce contention.
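
    For example, a hedged sketch of the "compute locally, lock only to publish" idea (compute_result() and shared_result are hypothetical names):

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static double shared_result;

    static double compute_result(void)    /* stand-in for the real long calculation */
    {
        return 42.0;
    }

    static void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            double local = compute_result();  /* long work done without the lock */

            pthread_mutex_lock(&lock);        /* lock held only to store the result */
            shared_result = local;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }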

    Edit: OK, thinking about it, I believe every critical section in Linux is approximately fair. Whenever there are sleepers, the unlock operation has to enter the kernel to tell it to wake them up. During the return from the kernel, the scheduler runs and picks the process with the highest priority. Sleepers rise in priority while waiting, so at some point they will be high enough that the release causes a task switch.

  • 2020-12-15 14:32

    IMHO you can use the SCHED_FIFO scheduler on Linux and change the priority of the threads:

    thread_func() {
        ... 
        pthread_t t_id = pthread_self();
        struct sched_param prio_zero, prio_one;
        prio_zero.sched_priority = sched_get_priority_min(SCHED_FIFO);
        prio_one.sched_priority = sched_get_priority_min(SCHED_FIFO) + 1;
        pthread_setschedparam(t_id, SCHED_FIFO, &prio_zero);
        ...
        while(true)
        {
            ... Doing something before
            pthread_setschedparam(t_id, SCHED_FIFO, &prio_one);
            critsect.enter();
            ... do calculations ...
            ... maybe call a blocking operation so we sleep ...
            critsect.leave();
            pthread_setschedparam(t_id, SCHED_FIFO, &prio_zero);
            ... Do something after
        }
    }
    
  • 2020-12-15 14:37

    You can build a FIFO "ticket lock" on top of pthreads mutexes, along these lines:

    #include <pthread.h>
    
    typedef struct ticket_lock {
        pthread_cond_t cond;
        pthread_mutex_t mutex;
        unsigned long queue_head, queue_tail;
    } ticket_lock_t;
    
    #define TICKET_LOCK_INITIALIZER { PTHREAD_COND_INITIALIZER, PTHREAD_MUTEX_INITIALIZER }
    
    void ticket_lock(ticket_lock_t *ticket)
    {
        unsigned long queue_me;
    
        pthread_mutex_lock(&ticket->mutex);
        queue_me = ticket->queue_tail++;          /* take the next ticket number */
        while (queue_me != ticket->queue_head)    /* wait until our number is served */
        {
            pthread_cond_wait(&ticket->cond, &ticket->mutex);
        }
        pthread_mutex_unlock(&ticket->mutex);
    }
    
    void ticket_unlock(ticket_lock_t *ticket)
    {
        pthread_mutex_lock(&ticket->mutex);
        ticket->queue_head++;                     /* serve the next ticket */
        pthread_cond_broadcast(&ticket->cond);    /* wake waiters so they can re-check */
        pthread_mutex_unlock(&ticket->mutex);
    }
    

    Under this kind of scheme, no low-level pthreads mutex is held while a thread is inside the ticket-lock-protected critical section, allowing other threads to join the queue.
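
    A hypothetical usage sketch, building on the definitions above (worker and the work inside the loop are placeholders): because each pass through the loop takes a fresh ticket, a thread that re-enters immediately goes to the back of the queue behind any thread already waiting.

    ticket_lock_t lock = TICKET_LOCK_INITIALIZER;

    void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            ticket_lock(&lock);
            /* ... do calculations, maybe block ... */
            ticket_unlock(&lock);
            /* re-entering takes a new, later ticket, so waiting threads run first */
        }
        return NULL;
    }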
