What's the difference between deadlock and livelock?

渐次进展 2021-01-29 17:20

Can somebody please explain with examples (of code) what is the difference between deadlock and livelock?

7 Answers
  • 2021-01-29 17:33

    Taken from http://en.wikipedia.org/wiki/Deadlock:

    In concurrent computing, a deadlock is a state in which each member of a group of processes is waiting for some other member to take an action, such as releasing a lock.

    A livelock is similar to a deadlock, except that the states of the processes involved in the livelock constantly change with regard to one another, none progressing. Livelock is a special case of resource starvation; the general definition only states that a specific process is not progressing.

    A real-world example of livelock occurs when two people meet in a narrow corridor, and each tries to be polite by moving aside to let the other pass, but they end up swaying from side to side without making any progress because they both repeatedly move the same way at the same time.

    Livelock is a risk with some algorithms that detect and recover from deadlock. If more than one process takes action, the deadlock detection algorithm can be repeatedly triggered. This can be avoided by ensuring that only one process (chosen randomly or by priority) takes action.
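
    Since the question asks for code, here is a minimal Java sketch (my own illustration, not from the quoted article) of that "both back off at the same time" pattern, using java.util.concurrent.locks.ReentrantLock; the thread names and the fixed 10 ms retry delay are assumptions made for the example.

    import java.util.concurrent.locks.ReentrantLock;

    public class PoliteLivelock {
        static final ReentrantLock left = new ReentrantLock();
        static final ReentrantLock right = new ReentrantLock();

        // Each worker needs both locks. On failure it politely releases what it
        // holds and retries after the same fixed delay, so both workers keep
        // stepping aside in lockstep and neither makes progress.
        static void work(ReentrantLock first, ReentrantLock second) throws InterruptedException {
            while (true) {
                first.lock();
                try {
                    if (second.tryLock()) {                  // try to grab the other lock
                        try {
                            System.out.println(Thread.currentThread().getName() + " got both locks");
                            return;                          // success: real work would go here
                        } finally {
                            second.unlock();
                        }
                    }
                } finally {
                    first.unlock();                          // back off so the other thread can try
                }
                Thread.sleep(10);                            // identical retry delay -> lockstep retries
            }
        }

        public static void main(String[] args) {
            new Thread(() -> { try { work(left, right); } catch (InterruptedException e) {} }, "A").start();
            new Thread(() -> { try { work(right, left); } catch (InterruptedException e) {} }, "B").start();
        }
    }

    In a real run, scheduling jitter usually breaks the symmetry eventually; randomizing the back-off delay, or letting only one thread retry, makes that certain, which is exactly the remedy described above.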

  • 2021-01-29 17:38

    I just planned to share some knowledge.

    Deadlocks

    A set of threads/processes is deadlocked if each thread/process in the set is waiting for an event that only another process in the set can cause.

    The important point is that the other process is also in the same set: that process is blocked as well, so no one can proceed.

    Deadlocks occur when processes are granted exclusive access to resources.

    All four of the following conditions must hold for a deadlock to occur.

    1. Mutual exclusion (each resource is assigned to at most one process at a time).
    2. Hold and wait (a process can hold some resources while requesting additional ones).
    3. No preemption (previously granted resources cannot be forcibly taken away). # This condition depends on the application
    4. Circular wait (there must be a circular chain of two or more processes, each waiting for a resource held by the next member of the chain). # This arises dynamically

    If all four conditions hold together, a deadlock can occur.
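
    To make the four conditions concrete, here is a minimal Java sketch (my own illustration, not code from the original answer): each thread holds one intrinsic lock exclusively (mutual exclusion), keeps it while requesting the other (hold and wait), cannot have it revoked by the JVM (no preemption), and waits for the lock the other thread holds (circular wait).

    public class DeadlockDemo {
        static final Object resourceA = new Object();
        static final Object resourceB = new Object();

        public static void main(String[] args) {
            // Thread 1 locks A first, then tries to lock B.
            Thread t1 = new Thread(() -> {
                synchronized (resourceA) {
                    sleepQuietly(100);             // give t2 time to grab B
                    synchronized (resourceB) {     // blocks forever: t2 holds B and waits for A
                        System.out.println("t1 acquired both");
                    }
                }
            });

            // Thread 2 locks B first, then tries to lock A (opposite order -> circular wait).
            Thread t2 = new Thread(() -> {
                synchronized (resourceB) {
                    sleepQuietly(100);
                    synchronized (resourceA) {
                        System.out.println("t2 acquired both");
                    }
                }
            });

            t1.start();
            t2.start();
        }

        static void sleepQuietly(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException ignored) {}
        }
    }

    With the 100 ms pauses the opposite acquisition order is hit almost every run, and both threads then wait forever with near-zero CPU usage.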

    LiveLock

    Each thread/process repeats the same state again and again without making further progress. This is similar to a deadlock in that no process enters its critical section. However, in a deadlock the processes simply wait without doing anything, whereas in a livelock the processes keep trying to proceed but keep returning to the same state.

    (In a deadlocked computation there is no possible execution sequence that succeeds. In a livelocked computation there are successful executions, but there are also one or more execution sequences in which no process ever enters its critical section.)

    Difference between deadlock and livelock

    When deadlock happens, no execution happens at all. In livelock, executions do happen, but they are never enough to get a process into its critical section.

  • 2021-01-29 17:40

    Livelock

    A thread often acts in response to the action of another thread. If the other thread's action is also a response to the action of another thread, then livelock may result.

    As with deadlock, livelocked threads are unable to make further progress. However, the threads are not blocked — they are simply too busy responding to each other to resume work. This is comparable to two people attempting to pass each other in a corridor: Alphonse moves to his left to let Gaston pass, while Gaston moves to his right to let Alphonse pass. Seeing that they are still blocking each other, Alphonse moves to his right, while Gaston moves to his left. They're still blocking each other, and so on...

    The main difference between livelock and deadlock is that the threads are not blocked; instead, they keep responding to each other continuously.

    In the image below, both circles (threads or processes) keep trying to make room for each other by moving left and right, yet neither ever gets past the other.

    [image: two circles repeatedly sidestepping each other without passing]
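
    A minimal Java sketch of that picture (my own illustration, not the tutorial's code; the Lane model and the 50 ms pause are assumptions made for the example):

    public class CorridorLivelock {
        enum Lane { LEFT, RIGHT }

        static class Person {
            final String name;
            volatile Lane lane = Lane.LEFT;        // both start face to face in the same lane

            Person(String name) { this.name = name; }

            // Keep stepping into the other lane for as long as we are face to face.
            void tryToPass(Person other) {
                while (this.lane == other.lane) {  // still blocking each other?
                    System.out.println(name + " steps aside");
                    this.lane = (this.lane == Lane.LEFT) ? Lane.RIGHT : Lane.LEFT;
                    try { Thread.sleep(50); } catch (InterruptedException e) { return; }
                    // the other person does exactly the same thing at the same time,
                    // so on the next check they are face to face again
                }
                System.out.println(name + " finally passes");
            }
        }

        public static void main(String[] args) {
            Person alphonse = new Person("Alphonse");
            Person gaston = new Person("Gaston");
            new Thread(() -> alphonse.tryToPass(gaston)).start();
            new Thread(() -> gaston.tryToPass(alphonse)).start();
        }
    }

    With perfectly synchronized timing the two threads sidestep forever; any timing drift eventually lets one pass, which is what distinguishes livelock (some executions succeed) from deadlock (none do).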

  • 2021-01-29 17:44

    DEADLOCK

    Deadlock is a condition in which a task waits indefinitely for conditions that can never be satisfied:

    - a task claims exclusive control over shared resources
    - a task holds resources while waiting for other resources to be released
    - tasks cannot be forced to relinquish resources
    - a circular waiting condition exists

    LIVELOCK

    Livelock conditions can arise when two or more tasks depend on and use the same resource, causing a circular dependency in which those tasks keep running forever, thus blocking all lower-priority tasks from running (those lower-priority tasks experience a condition called starvation).

  • 2021-01-29 17:47

    All the content and examples here are from

    Operating Systems: Internals and Design Principles
    William Stallings
    8th Edition

    Deadlock: A situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something.

    For example, consider two processes, P1 and P2, and two resources, R1 and R2. Suppose that each process needs access to both resources to perform part of its function. Then it is possible to have the following situation: the OS assigns R1 to P2, and R2 to P1. Each process is waiting for one of the two resources. Neither will release the resource that it already owns until it has acquired the other resource and performed the function requiring both resources. The two processes are deadlocked.

    Livelock: A situation in which two or more processes continuously change their states in response to changes in the other process(es) without doing any useful work:

    Starvation: A situation in which a runnable process is overlooked indefinitely by the scheduler; although it is able to proceed, it is never chosen.

    Suppose that three processes (P1, P2, P3) each require periodic access to resource R. Consider the situation in which P1 is in possession of the resource, and both P2 and P3 are delayed, waiting for that resource. When P1 exits its critical section, either P2 or P3 should be allowed access to R. Assume that the OS grants access to P3 and that P1 again requires access before P3 completes its critical section. If the OS grants access to P1 after P3 has finished, and subsequently alternately grants access to P1 and P3, then P2 may indefinitely be denied access to the resource, even though there is no deadlock situation.

    APPENDIX A - TOPICS IN CONCURRENCY

    Deadlock Example

    If both processes set their flags to true before either has executed the while statement, then each will think that the other has entered its critical section, causing deadlock.

    /* PROCESS 0 */
    flag[0] = true;            // <- get lock 0
    while (flag[1])            // <- is lock 1 free?
        /* do nothing */;      // <- no? so I wait 1 second, for example
                               // and test again.
                               // on more sophisticated setups we can ask
                               // to be woken when lock 1 is freed
    /* critical section*/;     // <- do what we need (this will never happen)
    flag[0] = false;           // <- releasing our lock
    
     /* PROCESS 1 */
    flag[1] = true;
    while (flag[0])
        /* do nothing */;
    /* critical section*/;
    flag[1] = false;
    

    Livelock Example

    /* PROCESS 0 */
    flag[0] = true;          // <- get lock 0
    while (flag[1]){         
        flag[0] = false;     // <- instead of sleeping, we do useless work
                             //    needed by the lock mechanism
        /*delay */;          // <- wait for a second
        flag[0] = true;      // <- and restart useless work again.
    }
    /*critical section*/;    // <- do what we need (this will never happen)
    flag[0] = false; 
    
    /* PROCESS 1 */
    flag[1] = true;
    while (flag[0]) {
        flag[1] = false;
        /*delay */;
        flag[1] = true;
    }
    /* critical section*/;
    flag[1] = false;
    

    [...] consider the following sequence of events:

    • P0 sets flag[0] to true.
    • P1 sets flag[1] to true.
    • P0 checks flag[1].
    • P1 checks flag[0].
    • P0 sets flag[0] to false.
    • P1 sets flag[1] to false.
    • P0 sets flag[0] to true.
    • P1 sets flag[1] to true.

    This sequence could be extended indefinitely, and neither process could enter its critical section. Strictly speaking, this is not deadlock, because any alteration in the relative speed of the two processes will break this cycle and allow one to enter the critical section. This condition is referred to as livelock. Recall that deadlock occurs when a set of processes wishes to enter their critical sections but no process can succeed. With livelock, there are possible sequences of executions that succeed, but it is also possible to describe one or more execution sequences in which no process ever enters its critical section.

    The content from here on is no longer from the book.

    And what about spinlocks?

    A spinlock is a technique to avoid the cost of the OS lock mechanism. Typically you would do:

    try
    {
       lock = beginLock();
       doSomething();
    }
    finally
    {
       endLock();
    }
    

    A problem starts to appear when beginLock() costs much more than doSomething(). In very exaggerated terms, imagine what happens when beginLock costs 1 second but doSomething costs just 1 millisecond.

    In that case, if you simply waited a millisecond, you would avoid being held up for a whole second.

    Why would beginLock cost so much? If the lock is free it does not cost a lot (see https://stackoverflow.com/a/49712993/5397116), but if the lock is not free the OS will "freeze" your thread, set up a mechanism to wake it when the lock is freed, and then actually wake it again at some point in the future.

    All of this is much more expensive than a few loops checking the lock. That is why it is sometimes better to do a "spinlock".

    For example:

    void beginSpinLock(lock)
    {
       if(lock) loopFor(1 milliseconds);
       else 
       {
         lock = true;
         return;
       }
    
       if(lock) loopFor(2 milliseconds);
       else 
       {
         lock = true;
         return;
       }
    
       // The important point is that the code above never causes
       // the thread to sleep; it "burns" this thread's time slice,
       // hopefully to good effect.
    
       // Some implementations fall back to the OS lock mechanism
       // after a few tries.
       if(lock) return beginLock(lock);
       else 
       {
         lock = true;
         return;
       }
    }
    

    If your implementation is not careful, you can fall into livelock, spending all the CPU on the lock mechanism.

    Also see:

    https://preshing.com/20120226/roll-your-own-lightweight-mutex/
    Is my spin lock implementation correct and optimal?

    Summary:

    Deadlock: a situation where nobody makes progress and nobody is doing anything (everyone is sleeping or waiting). CPU usage will be low;

    Livelock: a situation where nobody makes progress, but the CPU is spent on the lock mechanism rather than on your computation;

    Starvation: a situation where one process never gets the chance to run, whether through pure bad luck or because of some property of its own (low priority, for example);

    Spinlock: a technique for avoiding the cost of the OS waiting mechanism by busy-checking until the lock is freed.
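
    For completeness, a minimal Java sketch of the spinlock idea built on java.util.concurrent.atomic.AtomicBoolean.compareAndSet (a real JDK API); the class name and the Thread.onSpinWait() hint (available since JDK 9) are choices made for this example, not a transcription of the pseudocode above.

    import java.util.concurrent.atomic.AtomicBoolean;

    // Minimal test-and-set spinlock: lock() burns CPU cycles instead of asking
    // the OS to park the thread, so it only pays off when the critical section
    // is much cheaper than an OS-level block/wake cycle.
    public class SpinLock {
        private final AtomicBoolean locked = new AtomicBoolean(false);

        public void lock() {
            // Spin until this thread is the one that flips false -> true.
            while (!locked.compareAndSet(false, true)) {
                Thread.onSpinWait();   // scheduling hint; plain busy-wait otherwise
            }
        }

        public void unlock() {
            locked.set(false);
        }

        public static void main(String[] args) throws InterruptedException {
            SpinLock lock = new SpinLock();
            Runnable task = () -> {
                lock.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " in critical section");
                } finally {
                    lock.unlock();     // always release, mirroring the try/finally pattern above
                }
            };
            Thread t1 = new Thread(task), t2 = new Thread(task);
            t1.start(); t2.start();
            t1.join(); t2.join();
        }
    }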

  • 2021-01-29 17:52

    Imagine you have thread A and thread B. They are both synchronised on the same object, and inside this block there is a global variable they are both updating:

    static volatile boolean commonVar = false;   // volatile: the loop condition is read outside the synchronized block
    final Object lock = new Object();
    
    ...
    
    void threadAMethod(){
        ...
        while(commonVar == false){
             synchronized(lock){
                  ...
                  commonVar = true;              // A makes its "progress" ...
             }
        }
    }
    
    void threadBMethod(){
        ...
        while(commonVar == true){
             synchronized(lock){
                  ...
                  commonVar = false;             // ... and B immediately undoes it
             }
        }
    }
    

    So, when thread A enters the while loop and takes the lock, it does what it has to do and sets commonVar to true. Then thread B comes along, enters its while loop, and since commonVar is now true it is able to take the lock. It does so, executes the synchronized block, and sets commonVar back to false. Now thread A gets its next CPU window; it was about to exit the while loop, but thread B has just set commonVar back to false, so the cycle repeats. The threads are doing something (so they are not blocked in the traditional sense), but to pretty much no effect.

    It may also be worth mentioning that livelock does not necessarily have to appear here. I'm assuming the scheduler favours the other thread once the synchronized block finishes executing. Most of the time that is probably a hard condition to hit, and it depends on many things happening under the hood.
