In this and the subsequent chapters we glance through the main types of parallel programs following
the monograph of Apt, de Boer and Olderog ([1]). Undoubtedly, the simplest appearance of parallelism
manifests itself in the field of disjoint parallel programs. Disjoint parallel programs are programs
whose components cannot change the variables used by each other, that is, they can only share
read-only variables. Let us recall: a variable is changed in a program if it occurs on the left
hand side of an assignment command in the program. The set of variables changed by *S* was
denoted by change(*S*). In accordance with this, we can define the disjointness of two
programs: S₁ and S₂ are disjoint if change(S₁) ∩ var(S₂) = ∅ and var(S₁) ∩ change(S₂) = ∅.
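To make the recalled notions concrete, the sets change(S) and var(S), and the disjointness test they induce, can be sketched in Python. The representation of a program as a list of (left-hand side, read variables) pairs is an assumption made purely for this illustration; it is not the syntax used in the chapter.

```python
# A toy program is a list of assignments; each assignment is a pair
# (lhs, rhs_vars): the assigned variable and the set of variables read
# by the right-hand side expression.

def change(prog):
    """Variables occurring on the left-hand side of an assignment."""
    return {lhs for lhs, _ in prog}

def var(prog):
    """All variables occurring in the program."""
    return {lhs for lhs, _ in prog} | {v for _, rhs in prog for v in rhs}

def disjoint(p1, p2):
    """p1 and p2 are disjoint iff neither changes a variable of the other."""
    return not (change(p1) & var(p2)) and not (change(p2) & var(p1))

# S1 is x := z and S2 is y := z: disjoint, although both read z.
S1 = [("x", {"z"})]
S2 = [("y", {"z"})]
print(disjoint(S1, S2))              # True
print(disjoint(S1, [("z", set())]))  # False: the second program changes z, read by S1
```

The last call fails precisely because a shared variable is written, not merely read, which is the situation disjointness rules out.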

Example 93. Let S₁ ≡ x := z and S₂ ≡ y := z. Then change(S₁) ∩ var(S₂) = ∅ and var(S₁) ∩ change(S₂) = ∅, this means S₁ and S₂ are disjoint. Thus, disjoint programs can read the same variables.

Let us give the precise definition of disjoint parallel programs.

Definition 94. A program *D* is a disjoint parallel program if it is formed by the Backus–Naur form as below:

S ::= skip | u := t | S₁; S₂ | if B then S₁ else S₂ fi | while B do S₁ od | [T₁ ∥ … ∥ Tₙ],

where T₁, …, Tₙ are pairwise disjoint while programs.

Thus, parallel composition can be formed from while programs only, hence nested parallelism is not
allowed. Parallelism is allowed, however, in sequential composition, conditional statements and
while loops. If S = [T₁ ∥ … ∥ Tₙ], then *S* is
called a disjoint parallel composition, while T₁, …, Tₙ are called the components of *S*. In a
manner analogous to the definition of disjointness of programs, it can be useful to talk about a
program being disjoint from, or apart from, an expression or a set of states expressed by a first order
formula.

The operational interpretation of a disjoint parallel composition is rather straightforward: a parallel composition performs a transition exactly when one of its components does, the other components remaining unchanged. That is, ⟨Tᵢ, s⟩ → ⟨Tᵢ′, t⟩ implies ⟨[T₁ ∥ … ∥ Tᵢ ∥ … ∥ Tₙ], s⟩ → ⟨[T₁ ∥ … ∥ Tᵢ′ ∥ … ∥ Tₙ], t⟩.

A notational convention can be agreed upon when producing computations of disjoint parallel programs: the
program [E ∥ … ∥ E] should be denoted by *E*, and it represents the end of a
(terminating) computation. The absence of blocking lemma also holds in this case, mutatis
mutandis.

In general, determinism is not valid in this case. We can still say something, however, about the result of a computation. We state, without proof, two lemmas which assert properties of disjoint parallel programs interesting in themselves.

Definition 97. Let (A, →) be a reduction system, that is, A is a set and → ⊆ A × A (we can assume now that there is only one relation over A). Let →* be the reflexive, transitive closure of →. Then (A, →) is said to satisfy the diamond property if, for every a, b, c ∈ A with b ≠ c, the relations a → b and a → c imply that there exists d such that b → d and c → d. The relation → is said to be confluent if, for every a, b, c ∈ A, a →* b and a →* c imply b →* d and c →* d for some d.
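For a finite reduction system the diamond property can be checked mechanically. The following Python sketch (an illustration, not part of [1]) tests the defining condition directly on an explicitly given relation.

```python
from itertools import product

def diamond(A, R):
    """Check the diamond property of the relation R over the set A:
    whenever a -> b and a -> c with b != c, some d satisfies
    b -> d and c -> d."""
    succ = {a: {b for (x, b) in R if x == a} for a in A}
    for a in A:
        for b, c in product(succ[a], succ[a]):
            if b != c and not (succ[b] & succ[c]):
                return False
    return True

# a -> b, a -> c, b -> d, c -> d: a single diamond, so the property holds;
# removing c -> d leaves b and c without a common one-step successor.
A = {"a", "b", "c", "d"}
R = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
print(diamond(A, R))                 # True
print(diamond(A, R - {("c", "d")}))  # False
```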

There is a close connection between the diamond property and confluence, as Newman's lemma below states.

Lemma 98. Let (A, →) be a terminating reduction system. Then, if → satisfies the diamond property, it is confluent.

Proof. The proof is very well known in the literature, we just give a sketch of it here. Figure 3.1 gives a clear insight into the main idea of the proof. Firstly, let →ⁿ denote the n-fold relation obtained from →. By →⁰ we understand the identity relation. By induction on n we can prove that, if a →ⁿ b and a → c with b ≠ c, then there exists d such that b → d and c →ⁿ d.

Let us assume a →* b and a →* c. By the finiteness of the reduction sequences, a →ⁿ b and a →ᵐ c for some n, m. Assume n ≥ 1 and a → a₁ →ⁿ⁻¹ b. By the claim above, applied to a →ᵐ c and a → a₁, there exists d with c → d such that a₁ →ᵐ d. Then, for a₁ →ⁿ⁻¹ b and a₁ →ᵐ d, the induction hypothesis applies, giving a common reduct of b and d, hence of b and c.
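On finite examples the conclusion of Newman's lemma can also be verified directly, by computing reflexive, transitive closures and testing the confluence condition of Definition 97. The Python sketch below is only an illustration on an explicitly given relation.

```python
def reachable(R, a):
    """Elements reachable from a via the reflexive, transitive closure of R."""
    seen, stack = {a}, [a]
    while stack:
        x = stack.pop()
        for (y, z) in R:
            if y == x and z not in seen:
                seen.add(z)
                stack.append(z)
    return seen

def confluent(A, R):
    """a ->* b and a ->* c must admit a common reduct d."""
    return all(
        bool(reachable(R, b) & reachable(R, c))
        for a in A
        for b in reachable(R, a)
        for c in reachable(R, a)
    )

# A terminating system satisfying the diamond property is confluent, as
# Newman's lemma predicts; dropping c -> d destroys both properties.
A = {"a", "b", "c", "d"}
R = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
print(confluent(A, R))                 # True
print(confluent(A, R - {("c", "d")}))  # False
```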

We remark that Newman's lemma is not valid in this simple form if we have several, possibly overlapping, reductions in the reduction system (cf. ([10])). The following lemma treats the case when a reduction sequence is not finite.

Lemma 99. Assume (A, →) is a reduction system which satisfies the diamond property. Let a, c ∈ A be such that a →* c. Assume a → a₁ → a₂ → … for some infinite reduction sequence. Then there exists an infinite reduction sequence passing through c.

Without proof, we state the following lemma, called the yield lemma.

Lemma 100. Let (A, →) be a reduction system satisfying the diamond property. Let a ∈ A. Then either every reduction sequence starting from a is infinite, or there exists b such that a →* b and, for every c, b → c is not valid.

We can apply these results to disjoint parallel programs, as the following lemma shows.

Lemma 101. Let *D* be a disjoint parallel program, and assume → is the transition relation defined in the operational semantics. Then the reduction system formed by the configurations ⟨S, s⟩ together with → satisfies the diamond property.

As a corollary, we can express Lemma 100 in terms of disjoint parallel programs.

Corollary 102. Let *D* be a disjoint parallel program, and let s be a state. Then either every computation starting from ⟨D, s⟩ is infinite, or there exists a unique state r such that ⟨D, s⟩ →* ⟨E, r⟩.

Obviously, the sequential execution of the parallel components leads to a computation of a parallel construction. Furthermore, Corollary 102 implies that it is enough to implement a computation like this in order to obtain the result, if it exists, of a computation making use of the parallel components in any order. Thus the following sequentialization lemma is true.

Lemma 103. Let S₁, …, Sₙ be pairwise disjoint while programs. Then, for every state t, ⟦[S₁ ∥ … ∥ Sₙ]⟧(t) is defined iff ⟦S₁; …; Sₙ⟧(t) is defined, and, in this case,

⟦[S₁ ∥ … ∥ Sₙ]⟧(t) = ⟦S₁; …; Sₙ⟧(t).
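The content of Lemma 103 and Corollary 102 can be illustrated by brute force: for disjoint one-assignment components, every interleaving yields the same final state as the sequential composition. The encoding of assignments as (variable, update function) pairs in the Python sketch below is an assumption of the illustration only.

```python
from itertools import permutations

def run(state, prog):
    """Execute a list of assignments (lhs, fn), where fn maps the
    current state to the new value of lhs."""
    s = dict(state)
    for lhs, fn in prog:
        s[lhs] = fn(s)
    return s

# Two disjoint components, both reading z:  x := z + 1  and  y := z * 2.
S1 = [("x", lambda s: s["z"] + 1)]
S2 = [("y", lambda s: s["z"] * 2)]
init = {"x": 0, "y": 0, "z": 5}

# Every interleaving of the components yields one and the same final
# state, which is also the state produced sequentially by S1; S2.
results = {tuple(sorted(run(init, list(p)).items()))
           for p in permutations(S1 + S2)}
print(len(results) == 1)                         # True: a unique result
print(run(init, S1 + S2) == run(init, S2 + S1))  # True
```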

From this result a suggestion for a rule for the partial correctness calculus of disjoint parallel programs readily emerges, the sequentialization rule:

{p} S₁; …; Sₙ {q}
--------------------------
{p} [S₁ ∥ … ∥ Sₙ] {q}

The sequentialization rule states that in order to prove a partial correctness assertion for a parallel composition, it is enough to prove the same assertion for the sequential analogue of the parallel program. The sequentialization rule seems to be a good candidate at first glance, though we would like to have something handier, which does not make it necessary to introduce a huge number of intermediate assertions in the course of the proof. The disjoint parallelism rule was proposed by Hoare:

{p₁} S₁ {q₁}, …, {pₙ} Sₙ {qₙ}
-----------------------------------------
{p₁ ∧ … ∧ pₙ} [S₁ ∥ … ∥ Sₙ] {q₁ ∧ … ∧ qₙ}

where free(pᵢ, qᵢ) ∩ change(Sⱼ) = ∅, if i ≠ j. The stipulations in the disjoint parallelism rule are absolutely necessary in order to avoid unsound conclusions.

Example 104. ([1]) Let S₁ ≡ x := 0 and S₂ ≡ skip. Then ⊢ {true} x := 0 {true} and ⊢ {x = 1} skip {x = 1}, which would result in the false conclusion

{x = 1} [x := 0 ∥ skip] {true ∧ x = 1}

if we ignored the stipulations of the disjoint parallelism rule.

We quickly encounter the limitations of using the disjoint parallelism rule alone as a rule of inference for disjoint parallel programs. It can be shown, for example (see ([1])), that the correctness formula

{x = y} [x := x + 1 ∥ y := y + 1] {x = y}

cannot be proven with the disjoint parallelism rule alone. Instead, the following trick helps us out.

Example 105. ([1]) Take a fresh variable *Z*. Then

{x = Z} x := x + 1 {x = Z + 1}

and

{y = Z} y := y + 1 {y = Z + 1}.

The disjoint parallelism rule and the consequence rule give

{x = Z ∧ y = Z} [x := x + 1 ∥ y := y + 1] {x = y}.

But there is a slight problem: x = Z ∧ y = Z does not follow from x = y, thus, we are not entitled to infer

{x = y} [x := x + 1 ∥ y := y + 1] {x = y}

by referring to the consequence rule. We have, however,

{x = y} Z := x {x = Z ∧ y = Z},

which means

{x = y} Z := x; [x := x + 1 ∥ y := y + 1] {x = y}.
The moral is: introducing auxiliary variables may help the proof, but how can we get rid of the auxiliary variables afterwards? First of all, let us make precise what can be considered as an auxiliary variable:

Definition 106. Let A be a set of variables of a program *S*. Then A is a set of auxiliary variables if every occurrence of a variable x ∈ A lies within an assignment statement u := t, with some u ∈ A and expression t.
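The condition of Definition 106 can be tested mechanically on a toy representation of programs as lists of (left-hand side, read variables) assignment pairs; this representation, like the sketch itself, is only an illustration.

```python
def is_auxiliary(prog, A):
    """A is a set of auxiliary variables of the toy program `prog`
    (a list of (lhs, rhs_vars) assignments) if every occurrence of a
    variable of A lies inside an assignment to a variable of A."""
    for lhs, rhs in prog:
        if lhs not in A and (A & rhs):
            return False  # a variable of A is read outside such an assignment
    return True

# z occurs only in the assignment z := x, so {z} is auxiliary;
# {x} is not, since x is read when assigning to z.
prog = [("z", {"x"}), ("x", {"x"}), ("y", {"y"})]
print(is_auxiliary(prog, {"z"}))  # True
print(is_auxiliary(prog, {"x"}))  # False
```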

With this in hand we can formulate the missing rule for the partial correctness calculus of disjoint parallel programs: the auxiliary variable rule.

{p} S {q}
-------------
{p} S′ {q}

where A is a set of auxiliary variables of S with free(q) ∩ A = ∅,
and S′ is obtained from *S* by writing skip in
place of every assignment to a variable in A. If we identify programs with the programs obtained by
dropping some of the commands from them, we can prove the partial correctness formula of
Example 105 with the new set of rules. We remark that when formulating proof
outlines, it must be stated explicitly which variables are understood as auxiliary variables and which
are not. The formation rule of a proof outline involving a parallel command is as follows:

{p₁} S₁* {q₁}, …, {pₙ} Sₙ* {qₙ}
---------------------------------------------------------------
{p₁ ∧ … ∧ pₙ} [{p₁} S₁* {q₁} ∥ … ∥ {pₙ} Sₙ* {qₙ}] {q₁ ∧ … ∧ qₙ}

where S₁*, …, Sₙ* are proof outlines corresponding to S₁, …, Sₙ.

Finally, we remark that the calculus obtained by augmenting the total correctness calculus with the disjoint parallelism rule and the auxiliary variable rule is appropriate for proving total correctness assertions for disjoint parallel programs. It can be proved that both the partial and the total correctness calculus are sound and admit a relative completeness property, like in the case of while programs.