## Mutual exclusion

As the last section of this chapter, we present one of the most widely known problems of synchronizing parallel processes: the mutual exclusion problem. The task is to find a way of synchronizing some number of processes such that

• (mutual exclusion): at any moment, at most one process is executing its critical section,

• (absence of blocking): the synchronization discipline does not prevent the processes from running indefinitely,

• (lack of starvation): if a process is trying to enter its critical section, it will eventually succeed.

Following the monographs [1], [6] and [8], we give a short account of the main algorithms solving the mutual exclusion problem. We assume that a program S is given, consisting of parallel components of the form

S_i = while true do entry_i; CS_i; release_i; NC_i od

The entry section entry_i, the critical section CS_i, the release section release_i and the non-critical section NC_i are the various parts of the i-th component. In fact, we assume that S takes the following form:

S = Init; [S_1 || … || S_n]

where Init is a loop-free while program serving for the initialization of the variables of the entry and release sections. First, we present perhaps the simplest solution of mutual exclusion between two processes: Peterson's mutual exclusion algorithm.

Init: flag[0]:=false; flag[1]:=false; turn:=0;

and the first component S_1 is

while true do

1: flag[0]:=true;

2: turn:=0;

3: while (flag[1] and turn=0) do skip od;

4: critical section

5: flag[0]:=false;

6: non-critical section

od

and the second component S_2 is

while true do

7: flag[1]:=true;

8: turn:=1;

9: while (flag[0] and turn=1) do skip od;

10: critical section

11: flag[1]:=false;

12: non-critical section

od
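As an informal illustration (not part of the formal treatment that follows), the two components above can be simulated with Python threads: a shared counter is incremented only inside the critical section, so it reaches the exact total only if mutual exclusion holds. This is a sketch under the assumption that CPython's global interpreter lock makes the individual reads and writes sequentially consistent; it is not a substitute for the proofs below.

```python
import sys
import threading

sys.setswitchinterval(1e-4)   # switch threads often to exercise interleavings

N_ITER = 2000
flag = [False, False]   # flag[i]: process i wants to enter its critical section
turn = 0
counter = 0             # incremented only inside the critical section

def component(i: int) -> None:
    global turn, counter
    other = 1 - i
    for _ in range(N_ITER):
        flag[i] = True                     # steps 1 / 7
        turn = i                           # steps 2 / 8
        while flag[other] and turn == i:   # steps 3 / 9: busy wait
            pass
        counter += 1                       # steps 4 / 10: critical section
        flag[i] = False                    # steps 5 / 11: release
        # steps 6 / 12: non-critical section (empty here)

threads = [threading.Thread(target=component, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # expected: 4000, i.e. no increment is lost
```

The busy-wait loops correspond exactly to the waiting at labels 3 and 9; removing either the flag or the turn variable makes lost increments observable.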

First of all, we demonstrate informally that the system cannot deadlock. The only program point where the first component, S_1, can wait is the step with label 3. Similarly, the second component, S_2, can wait only at the step with label 9. A deadlock can thus only happen if (flag[1] and turn=0) and (flag[0] and turn=1) are both true, but this implies (turn=0 and turn=1), which is impossible.

Next, we show that it is impossible that control resides at step 4 and step 10 at the same time. Assume without loss of generality that the execution of S_1 is at step 4. This could only happen if previously flag[1]=false or turn=1 was true. To enter its critical section, S_2 must encounter flag[0]=false or turn=0. Since S_1 is in its critical section, flag[0] is true, hence turn=0 must hold. But this is impossible, as the following argument shows.

• S_1 enters its critical section since flag[1] is false. This can only happen when S_2 has not executed step 7 until then. Next, S_2 executes steps 7 and 8, setting turn:=1, and turn keeps this value while S_1 is in its critical section. This means S_2 cannot enter its critical section in this way.

• S_1 enters its critical section by reason of turn=1. This means step 2 of S_1 must precede step 8 of S_2. Then S_2 reads flag[0] and turn=1 at step 9, which means again that S_2 cannot enter its critical section.

Finally, we prove that the lack of starvation property also holds. That is, if a process sets its flag to true, then it eventually enters its critical section. Assume flag[0] is true, but S_1 is waiting at step 3. If S_2 is in its critical section, then eventually it sets flag[1] to false, which enables S_1 to enter its critical section. If S_1 did not make use of the occasion, then in the subsequent steps S_2 sets flag[1] to true and turn to 1, which results in S_2 coming to a halt at step 9, after which S_1 can enter its critical section.

Peterson has also generalised his algorithm to n processes, which we describe below.

Init: flag[k]:=0 for all 1 ≤ k ≤ n, and turn[j]:=0 for all 1 ≤ j ≤ n-1

Moreover, for all 1 ≤ i ≤ n, the i-th component is

while true do

j:=1;

while j ≤ n-1 do

flag[i]:=j;

turn[j]:=i;

while (flag[k] ≥ j for some k ≠ i) and turn[j]=i do skip od;

j:=j+1

od

critical section

flag[i]:=0

noncritical section

od
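The generalized algorithm can be exercised the same way as the two-process version; the inner while loop below mirrors the waiting condition "flag[k] ≥ j for some k ≠ i, and turn[j] = i". Again a sketch assuming CPython's interleaving semantics; process and level indices are 0-based here, with levels 1, …, n-1.

```python
import sys
import threading

sys.setswitchinterval(1e-4)

N = 3          # number of processes
N_ITER = 300
flag = [0] * N   # flag[k]: highest level process k is competing at (0 = not competing)
turn = [0] * N   # turn[j] used for levels j = 1 .. N-1
counter = 0

def component(i: int) -> None:
    global counter
    for _ in range(N_ITER):
        for j in range(1, N):               # levels 1 .. n-1
            flag[i] = j
            turn[j] = i
            while turn[j] == i and any(flag[k] >= j for k in range(N) if k != i):
                pass                        # busy wait at level j
        counter += 1                        # critical section
        flag[i] = 0                         # release

threads = [threading.Thread(target=component, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # expected: 900, i.e. N * N_ITER
```

At each level at least one competing process is held back, so after the n-1 levels at most one process survives into the critical section.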

It can be shown that the generalized Peterson algorithm also possesses the properties of deadlock freedom, mutual exclusion and absence of starvation. However, for n processes the above algorithm can become rather involved. In what follows, we present an algorithm that handles mutual exclusion for more than two processes in an elegant way: Lamport's bakery algorithm.

The algorithm distributes control among n processes: like customers in a bakery, every process draws a ticket number when it starts competing. A process then checks all the other processes sequentially and waits for those holding a smaller ticket number. Ties between equal ticket numbers are resolved by comparing the process indices.

Init: number[1]:=0;…; number[n]:=0;

choosing[1]:=false;…; choosing[n]:=false;

and the body of the perpetual loop of the i-th component is

1: choosing[i]:=true;

2: number[i]:=1+max{number[1],…,number[n]};

3: choosing[i]:=false;

4: for all j ≠ i do

5: wait choosing[j]=false;

6: wait number[j]=0 or (number[i],i)<(number[j],j)

7: od

8: critical section

9: number[i]:=0;

10: non-critical section,

where the pairs are ordered lexicographically: (a, b) < (c, d) iff a < c, or a = c and b < d. Leaning on [8], we give a sketch of the proof of the fact that the bakery algorithm fulfills the properties expected of mutual exclusion algorithms. Firstly, a lemma ensures mutual exclusion. In what follows, the i-th process is simply denoted by i.
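A direct simulation of the bakery algorithm is possible along the same lines as before; note that Python's built-in tuple comparison realizes exactly the lexicographic order on the pairs (number[i], i). A sketch, again assuming CPython's interleaving semantics:

```python
import sys
import threading

sys.setswitchinterval(1e-4)

N = 3
N_ITER = 300
number = [0] * N           # ticket numbers; 0 = not competing
choosing = [False] * N
counter = 0

def component(i: int) -> None:
    global counter
    for _ in range(N_ITER):
        choosing[i] = True                       # step 1
        number[i] = 1 + max(number)              # step 2: draw a ticket
        choosing[i] = False                      # step 3
        for j in range(N):                       # step 4
            if j == i:
                continue
            while choosing[j]:                   # step 5
                pass
            # step 6: wait until j is out or i holds the smaller pair;
            # tuple comparison is lexicographic
            while number[j] != 0 and (number[j], j) < (number[i], i):
                pass
        counter += 1                             # step 8: critical section
        number[i] = 0                            # step 9

threads = [threading.Thread(target=component, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # expected: 900, i.e. N * N_ITER
```

Unlike Peterson's algorithm, the ticket numbers grow without bound during a run; this is the price paid for the simple first-come-first-served structure.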

Lemma 131. Let us call steps 1-3 the doorway, and steps 4-7 the bakery. Let D be the set of indices of components residing in the doorway, B the set of indices of processes standing at the bakery stage, and C the set of indices of processes residing in their critical section. Assume i ∈ C and k ∈ B for some i ≠ k. Then (number[i], i) < (number[k], k).

Proof. Let i ∈ C and k ∈ B with i ≠ k, and assume i enters C before k. Then, before i enters C, it must have read choosing[k]=false at step 5. Let this be event E. We have two possible cases:

• k chooses its number after event E. Then, since number[i] is already settled, the only possibility is number[i] < number[k], hence (number[i], i) < (number[k], k).

• k chooses its number before E. This means that i, when reading number[k] at step 6, reads the "correct" value of number[k], and then i enters C. But this can happen only if (number[i], i) < (number[k], k).

Corollary 132. No two processes can be in C at the same time.

Lemma 133. (deadlock freedom) The bakery algorithm never comes to a deadlock.

Proof. If we assume that the non-critical sections are deadlock free, a process can wait only at steps 5 and 6. Since the condition of the wait statement in step 5 becomes true after finitely many steps for every process, a deadlock can occur only at step 6. But there are only finitely many processes in B, and the one holding the least pair (number[j], j) inevitably enters C, so the algorithm continues.

Lemma 134. (lack of starvation) Assume that, for every process i, the critical section of i terminates. Then any process entering D eventually reaches its critical section.

Proof. Let i enter D. Then i chooses its number and enters B. Every process entering D after i must choose a greater number than i, so i waits only for the finitely many processes ahead of it. Since no critical section can lead to an infinite computation, there must come a state in which (number[i], i) is the least pair among the competing processes, and then i enters C.

Finally, following [1], another well-known solution of the mutual exclusion problem is described: the algorithm of Dijkstra using semaphores. A semaphore is an integer variable S taking nonnegative values, with two operations allowed on it:

P(S): await S > 0 then S := S - 1 end,

V(S): S := S + 1.

A binary semaphore is a variable taking values 0 and 1, where all operations are understood modulo 2. Let out be a binary semaphore. Then the mutual exclusion algorithm of Dijkstra can be formulated as follows:

Init = out:=true; who:=0, where, writing NC_i and CS_i for the non-critical and critical sections of the i-th component,

S_i = while true do

NC_i;

await out then out:=false; who:=i end;

CS_i;

out:=true; who:=0

od
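To try the scheme without building a binary semaphore by hand, one can let a standard lock play the role of out: acquiring corresponds to the P (await) operation and releasing to the V operation, while who is kept as the auxiliary variable of the discussion below. This is a sketch, not Dijkstra's original formulation.

```python
import threading

N = 4          # processes are indexed 1..N, so that who = 0 means "nobody"
N_ITER = 500
out = threading.Lock()   # stands in for the binary semaphore out
who = 0                  # auxiliary variable, as in the proof outlines
counter = 0

def component(i: int) -> None:
    global who, counter
    for _ in range(N_ITER):
        out.acquire()        # await out then out:=false; who:=i end
        who = i
        assert who == i      # the precondition who = i of the critical section
        counter += 1         # critical section
        who = 0              # who := 0
        out.release()        # out := true

threads = [threading.Thread(target=component, args=(i,)) for i in range(1, N + 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # expected: 2000, i.e. N * N_ITER
```

The assertion inside the loop never fires, reflecting that between the P and V operations the value of who identifies the unique process in its critical section.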

Observe that the operators P and V have now been extended with actions governing an auxiliary variable who. The semaphore out indicates whether any process is in its critical section. Though we did not lay the theoretical foundation for a formal treatment, relying on the intuition of the reader, after [1], we present a proof of the mutual exclusion and absence of blocking properties. As invariant, we choose

I = (out ↔ who = 0).

Then the one below is a valid proof outline for S_i:

{who ≠ i}

while true do

{who ≠ i} NC_i;

{who ≠ i} await out then out:=false; who:=i end;

{¬out ∧ who = i} CS_i;

{¬out ∧ who = i} out:=true; who:=0

{who ≠ i}

od

The proof outlines in themselves are correct; we have to verify their interference freedom to be able to reason about mutual exclusion. For example, proving the independence of out:=true; who:=0 from the proof outline for S_j needs the verification of proof outlines like the following one:

{who ≠ j ∧ ¬out ∧ who = i} out:=true; who:=0 {who ≠ j},

which is valid by j ≠ 0. As to the independence of await out then out:=false; who:=i end, we consider, for instance, the proof outline below:

{¬out ∧ who = j} await out then out:=false; who:=i end {¬out ∧ who = j},

which can be verified by the synchronization rule, since the precondition contradicts the guard out. Taking all these into account, we can assert that the proof outlines for the components S_i are free from interference, which allows us to examine them in relation to mutual exclusion. Intuitively, two processes are mutually exclusive if it never happens that both of them are in their critical sections at the same time. This can be formalized by saying that the preconditions of the critical sections in the above proof outlines are mutually exclusive, that is,

¬(¬out ∧ who = i ∧ ¬out ∧ who = j)

holds for every i ≠ j. Taking into account the form of these preconditions, the validity of this property is immediate.

Leaning on the intuitive understanding of the nonblocking property, we verify its validity starting from the above proof outlines. Intuitively, since true stands as the condition of the while loop of every process, a block can take place only if all of the processes are about to execute their await statements and are blocked there. This leads us to the conclusion that in such a state the preconditions who ≠ i of the await statements should hold together with ¬out, which prevents the processes from entering their critical sections. Thus, let

B = ¬out ∧ who ≠ 1 ∧ … ∧ who ≠ n.

We have to check for the absence of blocking that B

can never be valid. This is implied by the invariant I = (out ↔ who = 0), together with the implications (who ≠ 1 ∧ … ∧ who ≠ n) → who = 0 and ¬out → who ≠ 0.
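Since out and who range over a finite set of values, this kind of argument can also be checked exhaustively. The sketch below assumes the invariant out ↔ who = 0 and reads the blocked state as ¬out together with who ≠ i for every process index i:

```python
def blocked_state_possible(n: int) -> bool:
    """Return True if some value of the pair (out, who) satisfies both the
    invariant I = (out <-> who = 0) and the blocked-state condition B."""
    for out in (True, False):
        for who in range(n + 1):      # who only ever holds 0 or an index 1..n
            inv = (out == (who == 0))
            blocked = (not out) and all(who != i for i in range(1, n + 1))
            if inv and blocked:
                return True
    return False

print(blocked_state_possible(5))   # prints False: no blocked state is reachable
```

The two loops enumerate every abstract state of the semaphore and the auxiliary variable, which is exactly the case analysis carried out by the implications above.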