In this chapter we discuss the verification of programs containing autonomous substructures called procedures. We augment the language of while programs with syntactic elements that allow procedures to be incorporated into the program text. Let a set of procedure names be given, with variables ranging over procedure names. A recursive program is then a pair consisting of a finite set of procedure declarations and the program body itself.

We may use an alternative formulation when writing concrete programs. The relational semantics of a procedure is a partial function understood as follows: the procedure started in the state *s* successfully terminates in the corresponding final state. This means that the relational semantics of a program can only be defined as a function of the denotational (or relational) semantics of the procedures occurring in it. Throughout this section we deal with one-parameter procedures only. Moreover, in what follows we are concerned mostly with partial correctness questions; we deal with the termination of recursive procedures at the end of the chapter.

We have to convince ourselves that this definition makes sense: we need to establish that the least upper bound required in its last point really exists. For this, observe that the underlying set of relations forms a complete lattice with set inclusion as its ordering. This means that the Knaster–Tarski theorem applies, provided the operator associated with an arbitrary program is monotone.

Lemma 72. Let *C* be a program, and consider the partial-function operator defined as in Definition 71. Then this operator is monotone.

Proof. A straightforward induction on *C*.

Thus the least fixpoint required in the definition of the denotational semantics really exists. We state without proof a lemma that lies at the basis of reasoning with least fixpoints: the so-called computation induction method of Scott and de Bakker [4].
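On a finite fragment of the state space, the least fixpoint whose existence the Knaster–Tarski theorem guarantees can be reached by iterating the monotone operator from the bottom element, the nowhere-defined partial function. A minimal sketch in Python, using a factorial-style body as the monotone operator (the bounded domain and the encoding of states are our illustrative choices):

```python
# Least fixpoint of a monotone operator on partial functions,
# computed by iteration from the empty (nowhere-defined) function.
# States are values of X; the approximation maps an input value
# of X to the final pair (X, Y).

DOMAIN = range(0, 8)  # bounded set of initial values of X

def body(proc):
    """One unfolding of a factorial-style body:
    if X = 0 then Y := 1 else X := X-1; call; X := X+1; Y := Y*X fi.
    `proc` is the current approximation of the procedure semantics."""
    new = {}
    for x in DOMAIN:
        if x == 0:
            new[x] = (0, 1)
        elif x - 1 in proc:            # is the recursive call defined yet?
            x1, y1 = proc[x - 1]
            new[x] = (x1 + 1, y1 * (x1 + 1))
    return new

approx = {}                            # bottom: nowhere-defined
while True:
    nxt = body(approx)
    if nxt == approx:                  # fixpoint reached
        break
    approx = nxt

print(approx[5])                       # -> (5, 120)
```

Monotonicity of `body` guarantees that the approximations grow; on the bounded domain the chain stabilizes after finitely many steps.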

Lemma 73. Let *L* be a complete lattice with bottom element, let the operator on it be monotone, and let a property of lattice elements satisfying the appropriate closure condition be given. Then the property holding at the bottom element, together with its preservation by the operator, implies that it holds at the least fixpoint.

As a special case, we also state the so-called Scott induction.

Lemma 74. Let *f* be continuous and let *P* be admissible. That is, for every sequence from *L*, if every member of the sequence belongs to *P*, then so does its least upper bound. Then the premises that *P* contains the bottom element and that *P* is preserved by *f* together imply that the least fixpoint of *f* belongs to *P*.

Definition 75. Let *r* be a relation on states and let *V* be a set of variables. We say that *r* is without side effects with respect to *V* if, for every pair of states related by *r*, each variable outside *V* retains its value and, moreover, the relation is insensitive to the values, taken from the base set *D* of the underlying model, of the variables outside *V*. If *C* is a program and *V* is the set of variables occurring in *C*, then we say that *r* is without side effects with respect to *C*.
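The definition can be pictured concretely on a finite model: a relation on states is without side effects with respect to *V* when no variable outside *V* changes its value across the relation. A sketch of this first condition in Python (the state encoding and the function name are ours; the insensitivity condition is omitted for brevity):

```python
def without_side_effects(rel, V):
    """rel: a set of pairs (s, s2) of states, each state encoded as a
    frozenset of (variable, value) items.  Returns True iff every
    variable outside V keeps its value across the relation."""
    for s, s2 in rel:
        before, after = dict(s), dict(s2)
        for var, val in before.items():
            if var not in V and after.get(var) != val:
                return False           # a variable outside V changed
    return True

# one transition: X changes from 1 to 2, Z stays 7
r = {(frozenset({("X", 1), ("Z", 7)}),
     frozenset({("X", 2), ("Z", 7)}))}

print(without_side_effects(r, {"X"}))  # -> True  (only X changed)
print(without_side_effects(r, {"Z"}))  # -> False (X is outside {Z})
```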

We can formulate the method called computation induction for proving partial correctness of parameterless recursive procedures as in the theorem below.

Theorem 76. Let *r* be a state relation, and let *C* be a recursive program with one recursive variable. Assume *r* is without side effects with respect to *C*. Then the partial correctness of the procedure follows from the derivability of the corresponding assertion for its body under the hypothesis assumed for the recursive calls.

Computation induction readily translates into an inference rule: the recursion rule.

The rule is understood as follows: assume we have a proof in Hoare logic of the partial correctness assertion for the body of the procedure, using the corresponding assertion for the recursive calls as a hypothesis. Then we can infer the assertion for the procedure itself, without hypotheses. Hoare logic with the recursion rule is powerful enough to prove statements about parameterless recursive procedures, as the following example shows.

Example 78. ([3])

    procedure F;
    begin
      if X = 0 then
        Y := 1
      else
        X := X - 1;
        F;
        X := X + 1;
        Y := Y * X
      fi
    end;
    F;
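A direct transcription of F into a recursive function makes its behaviour easy to inspect: the value of X is restored, and Y receives the factorial of X. A sketch (the translation into Python is ours):

```python
def F(x):
    """Direct transcription of the procedure F of Example 78.
    Returns the final values of (X, Y)."""
    if x == 0:
        return 0, 1                    # Y := 1
    x, y = F(x - 1)                    # X := X - 1; F
    x = x + 1                          # X := X + 1
    return x, y * x                    # Y := Y * X

print(F(4))  # -> (4, 24): X is restored and Y holds X!
```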

The partial correctness assertion can be proved as follows. We formulate the proof as a proof outline over the body of F: the hypothesis supplies the assertion for the recursive call, and the intermediate assertions are propagated through both branches of the conditional. The proof outline demonstrates that the assertion for the body is derivable from the hypothesis, from which, by the recursion rule, the desired assertion for F follows.

The proof system above, based on the recursion rule alone, is not complete for recursive procedures. Consider, for example, the procedure of Example 78 with a different precondition and postcondition. Obviously, the corresponding partial correctness assertion is true, though it cannot be proved solely with the recursion rule. First, we present without proof a semantic method for verifying partial correctness of recursive programs: Park's fixpoint induction method.

Theorem 79. Let *C* be a program, and let the relevant relation and sets of states be given. Then the following statements are valid.

1. If the relation is a prefixpoint of the operator associated with *C*, then it contains the least fixpoint; hence the corresponding partial correctness assertion holds.

2. Conversely, if the partial correctness assertion holds, then such a prefixpoint exists.

**Hint**

Observe that the assumption of Point 1 says precisely that *r* is a prefixpoint of the operator associated with *C*. By the Knaster–Tarski theorem, the least fixpoint is contained in *r*, from which, combined with the remaining assumption, Point 1 immediately follows. For Point 2, if the partial correctness assertion holds, then a suitable *r* can be read off from the semantics of the program. Moreover, applying the method of computation induction of Theorem 76, we can prove the required properties of this *r*.

We illustrate the proof method by an example.

Example 80. ([3]) Let us consider the program of Example 78 with the partial correctness assertion in question. By applying Point 1 of Theorem 79, we demonstrate that this partial correctness formula indeed holds. Let *r* be the candidate relation expressing the assertion. First of all, we compute the result of applying to *r* the operator associated with *C*, where *C* is the body of F.

Moreover, the required inclusion obviously holds; thus the assertion is valid, too.
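On a bounded domain, the prefixpoint check of Point 1 of Theorem 79 can be carried out mechanically: exhibit the candidate relation *r*, apply one unfolding of the body of F to it, and verify the inclusion. A minimal sketch in Python (the bounded domain and the encoding of states as pairs are our illustrative choices):

```python
from math import factorial

DOMAIN = range(0, 8)  # bounded set of initial values of X

def body(rel):
    """One unfolding of the body of F of Example 78, acting on
    input/output relations.  rel is a set of pairs (x_in, (x_out, y_out))."""
    out = set()
    for x in DOMAIN:
        if x == 0:
            out.add((0, (0, 1)))       # then-branch: Y := 1
        else:
            for xi, (xo, yo) in rel:   # result of the recursive call
                if xi == x - 1:
                    out.add((x, (xo + 1, yo * (xo + 1))))
    return out

# candidate relation r: F leaves X unchanged and sets Y = X!
r = {(x, (x, factorial(x))) for x in DOMAIN}

print(body(r) <= r)                    # prefixpoint check -> True
```

Since `r` is a prefixpoint of the operator, Point 1 yields that the semantics of F (the least fixpoint) is contained in `r`.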

The above version of the fixpoint induction theorem is not expressible in Hoare logic, since it uses relations that cannot be matched with the usual form of the formulas of Hoare logic. A solution to this problem is to add auxiliary variables to the correctness proofs of recursive programs; these are able to store the values of the program variables before the procedure call. We state a different version of the fixpoint induction theorem, preceded by the necessary definitions.

The next theorem is a version of the fixpoint induction theorem formulated with auxiliary variables. The proof is rather technical and hence omitted here.
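The operational content of the auxiliary variables can be illustrated in a small sketch: a fresh variable records the value of X before the call, so that the postcondition can relate final values to initial ones. (The transcription of F and the variable names below are ours.)

```python
def F(x):
    """Transcription of the factorial-style procedure F of Example 78.
    Returns the final values of (X, Y)."""
    if x == 0:
        return 0, 1
    x, y = F(x - 1)
    return x + 1, y * (x + 1)

x0 = 5              # auxiliary variable: freezes the initial value of X
x, y = F(x0)
# The post-state can now be related to the pre-state:
# X = x0 (X is unchanged) and Y = x0!
print(x == x0, y)   # -> True 120
```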

The previous theorem directly yields a proof rule for the Hoare calculus: the so-called rule of adaptation.

In what follows, we illustrate through some examples the proof method for partial correctness of recursive programs obtained by applying the rule of adaptation. The proof of Theorem 82, together with the historical background of the development of the adaptation rule, can be found in [3].

Example 84. ([1]) Let *F* be the command defined in Example 78. We now prove the desired partial correctness assertion with the help of the adaptation rule. We present the proof in the form of a proof outline.

    procedure F;
    begin
      if X = 0 then
        Y := 1
      else
        X := X - 1;
        F;
        X := X + 1;
        Y := Y * X
      fi
    end;

We have to prove the assertion for the body of F from the hypothesis concerning the recursive call. Applying the rule of adaptation with suitable instantiations of the auxiliary variables, we obtain the adapted triple for the call. Furthermore, the resulting implication between assertions holds, which, together with the consequence rule, yields the result. We can now derive the desired assertion for F by the recursion rule.

Example 85. ([?]) The program below supplies the indicated value of S, provided the indicated precondition holds.

    procedure F;
    begin
      if … then
        N := N - 1;
        F;
        S := S + 1;
        F;
        N := N + 1
      fi
    end;

We have to fill in the gaps in the proof by applying the adaptation rule for every call of F. Moreover, the derivability of the conclusion follows only by taking the recursion rule into account as well. For example, if the assertion before the first recursive call holds, then the adapted assertion after the call follows by the rule of adaptation. We can then conclude the desired triple by taking into account the stated relation between the assertions and the consequence rule.
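To see what the procedure computes, one may transcribe it directly, assuming (our assumption, since the guard is not shown above) that the elided condition is N ≠ 0; each call then adds 2^N − 1 to S and restores N:

```python
def F(n, s):
    """Transcription of the procedure of Example 85, assuming the
    elided guard is N <> 0.  Returns the final values of (N, S)."""
    if n != 0:
        n, s = F(n - 1, s)             # N := N - 1; F
        s = s + 1                      # S := S + 1
        n, s = F(n, s)                 # second call of F
        n = n + 1                      # N := N + 1
    return n, s

print(F(4, 0))  # -> (4, 15): S grows by 2**N - 1 and N is restored
```

The increment of S satisfies the recurrence T(0) = 0, T(n) = 2·T(n−1) + 1, whose solution is 2^n − 1.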