In this and the subsequent chapters we glance through the main types of parallel programs, following the monograph of Apt, de Boer, and Olderog. Undoubtedly, the simplest appearance of parallelism manifests itself in the field of disjoint parallel programs. Disjoint parallel programs are programs whose components cannot change the variables used by the other components; that is, they can share only read-only variables. Let us recall: a variable is changed in a program if it occurs on the left-hand side of an assignment command in the program. The set of variables changed by $S$ was denoted by $\mathit{change}(S)$. In accordance with this, we can define the disjointness of two programs.
Let us give the precise definition of disjoint parallel programs.
Thus, parallel composition can be formed from while programs only; hence nested parallelism is not allowed. Parallelism is allowed, however, inside sequential composition, conditional statements and while loops. If $S = [S_1 \| \ldots \| S_n]$, then $S$ is called a disjoint parallel composition, while $S_1, \ldots, S_n$ are called the components of $S$. In a manner analogous to the definition of disjointness of programs, it can be useful to talk about a program being disjoint from an expression or from a set of states expressed by a first-order formula.
The operational interpretation of a disjoint parallel composition is rather straightforward.
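To make this operational reading concrete, the following sketch uses a toy model of our own (not the formal semantics of the text): a component is a list of assignments, the components write pairwise different variables, and we enumerate every interleaving, checking that all of them terminate in the same state.

```python
# A toy operational model (our own illustration, not the formal
# semantics of the text): a component is a list of updates
# (variable, function_of_state); components of a disjoint
# composition write pairwise different variables and may read a
# shared read-only variable.

def run(steps, state):
    """Execute one sequence of updates on a copy of the state."""
    s = dict(state)
    for var, f in steps:
        s[var] = f(s)
    return s

def interleavings(c1, c2):
    """All merges of c1 and c2 preserving each component's order."""
    if not c1:
        yield list(c2)
    elif not c2:
        yield list(c1)
    else:
        for rest in interleavings(c1[1:], c2):
            yield [c1[0]] + rest
        for rest in interleavings(c1, c2[1:]):
            yield [c2[0]] + rest

# S1 writes only x, S2 writes only y; both read the shared
# read-only variable n, so [S1 || S2] is disjoint.
S1 = [("x", lambda s: s["n"] + 1), ("x", lambda s: s["x"] * 2)]
S2 = [("y", lambda s: s["n"] + 10)]
init = {"x": 0, "y": 0, "n": 5}

results = {tuple(sorted(run(i, init).items()))
           for i in interleavings(S1, S2)}
print(len(results))  # 1 -- every interleaving yields the same state
```

The fact that the set of final states is a singleton is exactly the determinism-of-results property discussed below; it fails as soon as two components write a common variable.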
A notational convention can be agreed upon when producing computations of disjoint parallel programs: the empty program is denoted by $E$, and it represents the end of a (terminating) computation. The absence of blocking lemma also holds in this case, mutatis mutandis.
In general, determinism does not hold in this case. We can still say something, however, about the result of a computation. We state without proof two lemmas which assert properties of disjoint parallel programs that are interesting in themselves.
Definition 97. Let $(A, \to)$ be a reduction system, that is, $A$ is a set and $\to \subseteq A \times A$ (we can assume now that there is only one relation over $A$). Let $\to^*$ be the reflexive, transitive closure of $\to$. Then $\to$ is said to satisfy the diamond property if, for every $a$, $b$, $c$ with $b \neq c$, the relations $a \to b$ and $a \to c$ imply that there exists $d$ such that $b \to d$ and $c \to d$. The relation $\to$ is said to be confluent if, for every $a$, $b$, $c$, the relations $a \to^* b$ and $a \to^* c$ imply $b \to^* d$ and $c \to^* d$ for some $d$.
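For finite reduction systems both notions can be checked mechanically. The sketch below is our own illustration (the four-element example system is shaped like the diamond itself) and implements the two definitions directly:

```python
# Checking the diamond property and confluence for a finite
# reduction system (A, ->); the relation is given as a set of pairs.

def successors(rel, a):
    """One-step successors of a under ->."""
    return {b for (x, b) in rel if x == a}

def reachable(rel, a):
    """Elements reachable from a under ->*, including a itself."""
    seen, todo = {a}, [a]
    while todo:
        x = todo.pop()
        for y in successors(rel, x):
            if y not in seen:
                seen.add(y)
                todo.append(y)
    return seen

def has_diamond(rel, A):
    # a -> b and a -> c with b != c must be joinable in one step
    return all(successors(rel, b) & successors(rel, c)
               for a in A
               for b in successors(rel, a)
               for c in successors(rel, a) if b != c)

def is_confluent(rel, A):
    # a ->* b and a ->* c must be joinable by ->*
    return all(reachable(rel, b) & reachable(rel, c)
               for a in A
               for b in reachable(rel, a)
               for c in reachable(rel, a))

A = {"a", "b", "c", "d"}
rel = {("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")}
print(has_diamond(rel, A), is_confluent(rel, A))  # True True
```

Deleting, say, the pair $(c, d)$ destroys the diamond property, and the checker reports it accordingly.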
There is a close connection between the diamond property and confluence, as Newman's lemma below states.
Proof. The proof is well known in the literature; we give only a sketch of it here. Figure 3.1 gives a clear insight into the main idea of the proof.
Firstly, let $\to^n$ denote the $n$-fold composition obtained from $\to$. By $\to^0$ we understand the identity relation. By induction on $n$ we can prove that, if $a \to^n b$ and $a \to c$ with $b \neq c$, then there exists $d$ such that $b \to^* d$ and $c \to^* d$ and, moreover, these two reductions take at most one and at most $n$ steps, respectively.
Let us assume $a \to^* b$ and $a \to^* c$. By the finiteness of the reduction sequences defining $\to^*$, $a \to^n b$ and $a \to^m c$ for some $n$ and $m$. We argue by induction on $n$. If $n = 0$, then $a = b$ and $d = c$ will do. Assume $n \geq 1$ and write $a \to b_1 \to^{n-1} b$. By the proof above, there exists $e$ with $b_1 \to^* e$ such that $c \to^* e$. Then, for $b_1$ and $e$, the induction hypothesis applies, yielding $d$ with $b \to^* d$ and $e \to^* d$, hence also $c \to^* d$.
We remark that Newman's lemma is not valid in this simple form if we have several, possibly overlapping, reductions in the reduction system. The following lemma treats the case when the reduction sequences need not be finite.
Without proof, we state the following lemma, called the yield lemma.
We can apply these results to disjoint parallel programs, as the following lemma shows.
As a corollary, we can express Lemma 100 in terms of disjoint parallel programs.
Obviously, the sequential execution of the parallel components yields a computation of the parallel construction. Furthermore, Corollary 102 implies that it is enough to implement one such computation in order to obtain the result of a computation using the parallel components in any order, if it exists. Thus the following sequentialization lemma is true.
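As a quick sanity check of the sequentialization idea, the toy sketch below (our own illustration; each hypothetical component writes only its own variable) compares the purely sequential execution of the components with a genuinely interleaved one:

```python
# For disjoint components, executing S1; S2 sequentially gives the
# same result as an interleaved execution of [S1 || S2].

def run(steps, state):
    """Execute a sequence of (variable, update) pairs on a copy of state."""
    s = dict(state)
    for var, f in steps:
        s[var] = f(s)
    return s

S1 = [("x", lambda s: s["x"] + 1), ("x", lambda s: s["x"] * 3)]
S2 = [("y", lambda s: s["y"] + 10)]
init = {"x": 1, "y": 2}

sequential = run(S1 + S2, init)                   # S1; S2
interleaved = run([S1[0], S2[0], S1[1]], init)    # a proper merge
print(sequential == interleaved)  # True
```

This is exactly why an implementation is free to run the components one after the other: by the lemma, the final state is the same as for any interleaved schedule.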
From this result, a suggestion for a rule for the partial correctness calculus of disjoint parallel programs readily emerges:
\[
\frac{\{p\}\ S_1;\ \ldots;\ S_n\ \{q\}}{\{p\}\ [S_1 \| \ldots \| S_n]\ \{q\}}
\]
The sequentialization rule states that, in order to prove a partial correctness assertion for a parallel composition, it is enough to prove the same assertion for the sequential analogue of the parallel program. The sequentialization rule seems to be a good candidate at first glance, though we would like to have something more handy, which does not force us to introduce a huge number of intermediate assertions in the course of the proof. The disjoint parallelism rule was proposed by Hoare:
\[
\frac{\{p_i\}\ S_i\ \{q_i\}, \quad i \in \{1, \ldots, n\}}{\{\bigwedge_{i=1}^n p_i\}\ [S_1 \| \ldots \| S_n]\ \{\bigwedge_{i=1}^n q_i\}}
\]
where $\mathit{free}(p_i, q_i) \cap \mathit{change}(S_j) = \emptyset$ if $i \neq j$. The stipulations in the disjoint parallelism rule are absolutely necessary in order to avoid unsound conclusions.
Example 104. Let, for instance, $S_1 = X := 0$ and $S_2 = Y := 1$. Then $\{Y = 0\}\ X := 0\ \{Y = 0\}$ and $\{\mathit{true}\}\ Y := 1\ \{Y = 1\}$, which would result in the false conclusion
\[
\{Y = 0\}\ [X := 0 \| Y := 1]\ \{Y = 0 \wedge Y = 1\}
\]
if we ignored the stipulations of the disjoint parallelism rule: here $\mathit{free}(p_1, q_1) = \{Y\}$ intersects $\mathit{change}(S_2) = \{Y\}$.
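To see operationally why such a conclusion must be false, one can simply execute both interleavings of a pair of disjoint components whose premises violate the side condition. The instance below (components $X := 0$ and $Y := 1$, precondition $Y = 0$) is our own illustration:

```python
# An executable instance violating the side condition of the
# disjoint parallelism rule: the components X := 0 and Y := 1 are
# disjoint programs, but the assertion Y = 0 of the first premise
# mentions Y, which the second component changes.

def run(order, state):
    s = dict(state)
    for step in order:
        step(s)
    return s

s1 = lambda s: s.__setitem__("x", 0)   # component: X := 0
s2 = lambda s: s.__setitem__("y", 1)   # component: Y := 1

init = {"x": 7, "y": 0}                # precondition Y = 0 holds
finals = [run(o, init) for o in ([s1, s2], [s2, s1])]

# Every execution ends with Y = 1, so the naive conjunction
# Y = 0 /\ Y = 1 holds in no reachable final state.
print(all(f["x"] == 0 and f["y"] == 1 for f in finals))  # True
print(any(f["y"] == 0 for f in finals))                  # False
```

Both schedules agree on the final state, as the sequentialization lemma predicts, yet that state refutes the conjoined postcondition.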
We quickly encounter the limitations of using the disjoint parallelism rule alone as a rule of inference for disjoint parallel programs. It can be shown, for example, that the correctness formula
\[
\{X = Y\}\ [X := X+1 \| Y := Y+1]\ \{X = Y\}
\]
cannot be proven with the disjoint parallelism rule alone. Instead, the following trick helps us out.
Example 105. Take a fresh variable $Z$. Then
\[
\{X = Z\}\ X := X+1\ \{X = Z+1\} \quad\text{and}\quad \{Y = Z\}\ Y := Y+1\ \{Y = Z+1\}.
\]
The disjoint parallelism rule and the consequence rule give
\[
\{X = Z \wedge Y = Z\}\ [X := X+1 \| Y := Y+1]\ \{X = Y\}.
\]
But there is a slight problem: $X = Z \wedge Y = Z$ does not follow from $X = Y$, thus we are not entitled to infer
\[
\{X = Y\}\ [X := X+1 \| Y := Y+1]\ \{X = Y\}
\]
by referring to the consequence rule. We have, however,
\[
\{X = Y\}\ Z := X;\ [X := X+1 \| Y := Y+1]\ \{X = Y\},
\]
since $\{X = Y\}\ Z := X\ \{X = Z \wedge Y = Z\}$ holds and the composition rule applies.
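Assuming the standard instance $\{X = Y\}\ [X := X+1 \| Y := Y+1]\ \{X = Y\}$, the whole detour can be summarized in one derivation sketch (the rule names in parentheses are the usual ones):

```latex
\begin{align*}
&\{X = Y\}\ Z := X\ \{X = Z \wedge Y = Z\}
  && \text{(assignment, consequence)}\\
&\{X = Z \wedge Y = Z\}\ [X := X{+}1 \| Y := Y{+}1]\ \{X = Z{+}1 \wedge Y = Z{+}1\}
  && \text{(disjoint parallelism)}\\
&\{X = Y\}\ Z := X;\ [X := X{+}1 \| Y := Y{+}1]\ \{X = Y\}
  && \text{(composition, consequence)}\\
&\{X = Y\}\ [X := X{+}1 \| Y := Y{+}1]\ \{X = Y\}
  && \text{(auxiliary variable rule, } A = \{Z\}\text{)}
\end{align*}
```

The last step replaces $Z := X$ by $\mathit{skip}$ and identifies $\mathit{skip}; S$ with $S$; the side condition $\mathit{free}(X = Y) \cap \{Z\} = \emptyset$ is satisfied.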
The moral is: introducing auxiliary variables may help the proof, but how can we get rid of the auxiliary variables afterwards? First of all, let us fix what can be considered as an auxiliary variable: a set $A$ of variables of $S$ is a set of auxiliary variables of $S$ if each variable in $A$ occurs in $S$ only in assignments to variables belonging to $A$.
With this in hand we can formulate the missing rule for the partial correctness calculus of disjoint parallel programs, the auxiliary variable rule:
\[
\frac{\{p\}\ S\ \{q\}}{\{p\}\ S'\ \{q\}}
\]
where $A$ is a set of auxiliary variables of $S$ with $\mathit{free}(q) \cap A = \emptyset$, and $S'$ is obtained from $S$ by writing $\mathit{skip}$ in place of every assignment to a variable in $A$. If we identify programs with the programs obtained by dropping some of the $\mathit{skip}$ commands from them, we can prove the partial correctness formula of Example 105 with the new set of rules. We remark that, when formulating proof outlines, it must be stated explicitly which variables are understood as auxiliary variables and which are not. The formation rule of a proof outline involving a parallel command is as follows:
\[
\frac{\{p_i\}\ S_i^*\ \{q_i\}, \quad i \in \{1, \ldots, n\}}{\{\bigwedge_{i=1}^n p_i\}\ [S_1^* \| \ldots \| S_n^*]\ \{\bigwedge_{i=1}^n q_i\}}
\]
where $\{p_i\}\ S_i^*\ \{q_i\}$ are proof outlines corresponding to the components $S_1, \ldots, S_n$.
Finally, we remark that the calculus obtained by augmenting the total correctness calculus with the disjoint parallelism rule and the auxiliary variable rule is appropriate for proving total correctness assertions for disjoint parallel programs. It can be proved that both the partial and the total correctness calculi are sound and admit a relative completeness property, as in the case of while programs.