I've been searching for information on Peterson's algorithm, but I have come across references stating that it does not guarantee freedom from starvation, only freedom from deadlock. Is this true? And if so, can someone elaborate on why?
As Ben Jackson suspects, the problem is with a generalized algorithm. The standard 2-process Peterson's algorithm satisfies the no-starvation property.
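For comparison, here is a minimal sketch of the 2-process version in the same C++-like style; the lock/unlock helpers and the conventional flag/turn names are mine, not taken from the question or from any paper.

// Standard 2-process Peterson's algorithm
bool flag[2] = {false, false};   // flag[i]: process i wants to enter
int turn = 0;                    // when both want in, 'turn' decides who goes first

void lock(int i) {
    int other = 1 - i;
    flag[i] = true;              // announce interest
    turn = other;                // politely give priority to the other process
    while( flag[other] and turn == other )
        ; // busy wait until the other is done or has yielded
}

void unlock(int i) {
    flag[i] = false;
}

The no-starvation property follows because a waiting process i is overtaken at most once: if the other process leaves the critical section and tries to re-enter, its own lock call sets turn back to i, which ends i's busy wait (and if it never re-enters, flag[other] stays false). As with the sketch below, real shared variables would have to be atomic with suitable memory ordering; this only shows the logical structure.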
Apparently, Peterson's original paper actually had an algorithm for N processors. Here is a sketch that I just wrote up, in a C++-like language, that is supposedly this algorithm:
// Shared resources
int pos[N], step[N];

bool some_pos_is_big(int i, int j);  // forward declaration; defined below

// Individual process code
void process(int i) {
    int j;
    for( j = 0; j < N-1; j++ ) {
        pos[i] = j;
        step[j] = i;
        while( step[j] == i and some_pos_is_big(i, j) )
            ; // busy wait
    }
    // insert critical section here!
    pos[i] = 0;
}

// True if some process other than i is at level j or higher
bool some_pos_is_big(int i, int j) {
    int k;
    for( k = 0; k < N; k++ )         // check every other process
        if( k != i and pos[k] >= j )
            return true;
    return false;
}
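Note that this sketch is not directly runnable: the shared arrays are plain ints (concurrent access would be a data race in real C++), and, as far as I can tell, because the levels start at 0 an idle process (pos[k] == 0) still counts as "big" at level 0, so a lone contender would spin forever. Peterson's original formulation (and the textbook filter lock) runs the levels from 1 to N-1, with level 0 meaning "not competing", which avoids that problem. Here is a minimal harness along those lines; the worker loop, iteration count, and in_cs counter are mine and exist only to exercise the lock, so treat it as an experiment scaffold rather than a faithful copy of the sketch above:

#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

constexpr int N = 3;                 // number of competing threads
std::atomic<int> pos[N];             // pos[i]: current level of thread i, 0 = not competing
std::atomic<int> step[N];            // step[j]: last thread to arrive at level j
std::atomic<int> in_cs{0};           // only used to check mutual exclusion

bool some_pos_is_big(int i, int j) {
    for (int k = 0; k < N; k++)
        if (k != i && pos[k] >= j)
            return true;
    return false;
}

void worker(int i) {
    for (int iter = 0; iter < 10000; iter++) {
        for (int j = 1; j <= N - 1; j++) {   // entry protocol: levels 1 .. N-1
            pos[i] = j;
            step[j] = i;
            while (step[j] == i && some_pos_is_big(i, j))
                ;                            // busy wait
        }
        int inside = ++in_cs;                // critical section
        assert(inside == 1);                 // at most one thread at a time
        --in_cs;
        pos[i] = 0;                          // exit protocol: stop competing
    }
}

int main() {
    std::vector<std::thread> threads;
    for (int i = 0; i < N; i++)
        threads.emplace_back(worker, i);
    for (auto& t : threads)
        t.join();
}

Running this only checks mutual exclusion (the asserts); whether starvation is possible still has to be argued analytically, since a test run can't demonstrate its absence.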
Here's a deadlock scenario with N = 3:

- Process 0 sets pos[0] = 0 and step[0] = 0 and then waits.
- Process 2 sets pos[2] = 0 and step[0] = 2 and then waits.
- Process 1 sets pos[1] = 0 and step[0] = 1 and then waits.
- Process 2 notices the change in step[0] and so sets j = 1, pos[2] = 1, and step[1] = 2.
- Processes 0 and 1 keep waiting since pos[2] is big.
- Process 2 sees that no other pos is big, so its for loop advances to j = 2. It thus escapes the for loop and enters the critical section. After completion, it sets pos[2] = 0 but immediately starts competing for the critical section again, thus setting step[0] = 2 and waiting.
- Process 1 notices the change in step[0] and proceeds as process 2 did before.

References: All details obtained from the paper "Some myths about famous mutual exclusion algorithms" by Alagarsamy. Apparently Block and Woo proposed a modified algorithm in "A more efficient generalization of Peterson's mutual exclusion algorithm" that does satisfy no-starvation, which Alagarsamy later improved in "A mutual exclusion algorithm with optimally bounded bypasses" (by obtaining the optimal starvation bound N-1).
A. Rex is wrong about the deadlock scenario.
(As a side note: the correct term would be starvation scenario, since a deadlock requires at least two threads to be 'stuck'; see Wikipedia: deadlock and starvation.)
As processes 2 and 1 go into level 0, step[0] is set to either 1 or 2, which makes the wait condition of process 0 false, since step[0] == 0 no longer holds.
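To make that concrete, this is the shared state after the first three steps of the scenario above, and what process 0's busy-wait condition evaluates to (an illustrative fragment reusing some_pos_is_big from the first answer, not meant to be compiled on its own):

// After process 0, then process 2, then process 1 have each entered level 0:
int pos[3]  = {0, 0, 0};
int step[3] = {1, 0, 0};   // step[0] was written as 0, then 2, then 1

// Process 0's busy-wait condition at level 0:
bool waiting = (step[0] == 0) and some_pos_is_big(0, 0);
// step[0] == 0 is false, so waiting == false: process 0 leaves the busy wait
// and advances to level 1 instead of being blocked as the scenario claims.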
The Peterson algorithm for 2 processes is a little simpler and does protect against starvation. The Peterson algorithm for n processes is much more complicated.
For a process to starve, the condition step[j] == i and some_pos_is_big(i, j) must stay true forever. This implies that no other process enters the same level (which would make step[j] == i false) and that at least one process always stays at the same level as i or at a higher one (to keep some_pos_is_big(i, j) true).
Moreover, only one process can be stuck at this level j. If two were stuck, then for one of them step[j] == i would be false, and it therefore wouldn't be stuck.
So that means no other process can enter the same level, and there must always be a process at a level above.
Since no other process can join the processes above (they can't get into level j and therefore can't get above level j), at least one process above must be stuck as well, or else the process in the critical section never releases it.
If we assume that the process in the critical section leaves it after a finite time, then one of the processes above must be stuck. But for that one to be stuck, another one above it must be stuck, and so on. However, there are only finitely many processes above, so eventually the topmost process can't be stuck, as it will advance once the critical section is released.
And therefore the Peterson algorithm for n processes protects against starvation!
I suspect the comment about starvation is about some generalized, N-process Peterson's Algorithm. It is possible to construct an N-process version with bounded waiting, but without having one in particular to discuss we can't say why that particular generalization might be subject to starvation.
A quick Google turned up this paper, which includes pseudocode. As you can see, the generalized version is much more complex (and expensive).