We begin with some simple examples of algorithms in the message passing model.

We start with a simple algorithm *Spanning-Tree-Broadcast* for the (single message) broadcast problem, assuming that a spanning tree of the network graph with *n* nodes (processors) is already given. Later, we will remove this assumption. A distinguished processor *r* wishes to send a message *M* to all other processors. The spanning tree rooted at *r* is maintained in a distributed fashion: each processor has a distinguished channel that leads to its parent in the tree, as well as a set of channels that lead to its children in the tree.

*Spanning-Tree-Broadcast*

Initially *M* is in transit from *r* to all its children in the spanning tree.

```
Code for r:
1  upon receiving no message:        // first computation event by r
2      TERMINATE

Code for p_v, 1 ≤ v ≤ n, v ≠ r:
3  upon receiving M from parent:
4      SEND M to all children
5      TERMINATE
```

The algorithm *Spanning-Tree-Broadcast* is correct whether the system is synchronous or asynchronous. Moreover, the message and time complexities are the same in both models.
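As a sanity check, the round-by-round behaviour can be simulated. The sketch below is a minimal illustration in Python, not part of the original pseudocode; the example tree and the function name `broadcast_rounds` are our own.

```python
# Round-by-round simulation of Spanning-Tree-Broadcast in the
# synchronous model. The tree is given as a child list per node;
# node "r" is the root holding the message M.

def broadcast_rounds(children, root):
    """Return {node: round in which it receives M}; the root gets round 0."""
    received = {root: 0}
    frontier = [root]            # nodes whose M is in transit to their children
    rnd = 0
    while frontier:
        rnd += 1
        nxt = []
        for p in frontier:
            for c in children.get(p, []):   # SEND M to all children
                received[c] = rnd
                nxt.append(c)
        frontier = nxt
    return received

# A small example tree of depth 2 rooted at "r" (assumed topology).
children = {"r": ["a", "b"], "a": ["c"], "b": ["d"]}
times = broadcast_rounds(children, "r")
# Each processor at distance t receives M in round t (Lemma 13.1),
# so the last delivery happens in round d = depth of the tree.
assert times == {"r": 0, "a": 1, "b": 1, "c": 2, "d": 2}
assert max(times.values()) == 2    # time complexity = depth d
```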

Using simple inductive arguments we will first prove a lemma that shows that by the end of round *t*, the message *M* reaches all processors at distance *t* (or less) from *r* in the spanning tree.

**Lemma 13.1 **
*In every admissible execution of the broadcast algorithm in the synchronous model, every processor at distance t from r in the spanning tree receives the message M in round t.*

**Proof. **We proceed by induction on the distance *t* of a processor from *r*. First let *t* = 1. It follows from the algorithm that each child of *r* receives the message *M* in round 1.

Assume that each processor at distance *t* − 1 received the message *M* in round *t* − 1. We need to show that each processor *p* at distance *t* receives *M* in round *t*. Let *q* be the parent of *p* in the spanning tree. Since *q* is at distance *t* − 1 from *r*, by the induction hypothesis, *q* received *M* in round *t* − 1. By the algorithm, *p* will hence receive *M* in round *t*.

By Lemma 13.1, the time complexity of the broadcast algorithm is *d*, where *d* is the depth of the spanning tree. Now since *d* is at most *n* − 1 (when the spanning tree is a chain) we have:

**Theorem 13.2 **
*There is a synchronous broadcast algorithm for n processors with message complexity n − 1 and time complexity d, when a rooted spanning tree with depth d is known in advance.*

We now move to an asynchronous system and apply a similar analysis.

**Lemma 13.3 **
*In every admissible execution of the broadcast algorithm in the asynchronous model, every processor at distance t from r in the spanning tree receives the message M by time t.*

**Proof. **We proceed by induction on the distance *t* of a processor from *r*. First let *t* = 1. It follows from the algorithm that *M* is initially in transit to each processor *p* at distance 1 from *r*. By the definition of time complexity for the asynchronous model, *p* receives *M* by time 1.

Assume that each processor at distance *t* − 1 received the message *M* by time *t* − 1. We need to show that each processor *p* at distance *t* receives *M* by time *t*. Let *q* be the parent of *p* in the spanning tree. Since *q* is at distance *t* − 1 from *r*, by the induction hypothesis, *q* sends *M* to *p* when it receives *M*, by time *t* − 1. By the algorithm, *p* will hence receive *M* by time *t*.

We immediately obtain:

**Theorem 13.4 **
*There is an asynchronous broadcast algorithm for n processors with message complexity n − 1 and time complexity d, when a rooted spanning tree with depth d is known in advance.*
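The asynchronous timing argument can also be illustrated concretely. In the asynchronous time measure, every message delay is normalised to at most 1 time unit, so a processor at distance *t* receives the message by time *t*. The Python sketch below is our own illustration under that normalisation; the example tree and random per-edge delays are assumptions, not part of the original text.

```python
import random

# Sketch of the asynchronous broadcast: each tree edge has an
# arbitrary delay in (0, 1] (the normalisation used by the
# asynchronous time-complexity measure). A processor forwards M
# to its children immediately upon receiving it.

def broadcast_async(children, root, delay):
    """Return {node: time at which it receives M}; the root holds M at time 0."""
    t = {root: 0.0}
    stack = [root]
    while stack:
        p = stack.pop()
        for c in children.get(p, []):
            t[c] = t[p] + delay(p, c)   # M arrives after the edge delay
            stack.append(c)
    return t

children = {"r": ["a", "b"], "a": ["c"]}           # assumed example tree
depth = {"r": 0, "a": 1, "b": 1, "c": 2}
rng = random.Random(0)
t = broadcast_async(children, "r", lambda p, c: rng.uniform(0.01, 1.0))
# Lemma 13.3: every processor at distance t receives M by time t.
assert all(t[v] <= depth[v] for v in t)
```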

The asynchronous algorithm called *Flood*, discussed next, constructs a spanning tree rooted at a designated processor *r*. The algorithm is similar to the Depth First Search (DFS) algorithm. However, unlike DFS, where there is just one processor with “global knowledge” about the graph, in the *Flood* algorithm many processors contribute to the construction of the tree concurrently.

Each processor has four local variables: *neighbours*, *parent*, *children* and *other*. The links adjacent to a processor are identified with distinct numbers starting from 1 and stored in the local variable *neighbours*. We will say that the **spanning tree has been constructed** when the variable *parent* stores the identifier of the link leading to the parent of the processor in the spanning tree, except that this variable is *NONE* for the designated processor *r*.

The code of each processor is composed of segments. There is a segment (lines 1–4) that describes how the local variables of a processor are initialised. Recall that the local variables are initialised that way before time 0. The next three segments (lines 5–11, 12–15 and 16–19) describe the instructions that any processor executes in response to having received a message: ⟨*adopt*⟩, ⟨*approved*⟩ or ⟨*rejected*⟩. The last segment (lines 20–22) is only included in the code of processor *r*. This segment is executed only when the local variable *parent* of processor *r* is *NIL*. At some point of time, it may happen that more than one segment can be executed by a processor (e.g., because the processor received ⟨*adopt*⟩ messages on two different links); the processor then executes the eligible segments one at a time.

*Flood*

```
Code for any processor p_v, 1 ≤ v ≤ n

 1  INITIALISATION
 2      parent := NIL
 3      children := ∅
 4      other := ∅

 5  PROCESS MESSAGE ⟨adopt⟩ that has arrived on link k
 6      IF parent = NIL
 7      THEN parent := k
 8           SEND ⟨approved⟩ to link k
 9           SEND ⟨adopt⟩ to all links in neighbours \ {k}
10      ELSE
11           SEND ⟨rejected⟩ to link k

12  PROCESS MESSAGE ⟨approved⟩ that has arrived on link k
13      children := children ∪ {k}
14      IF children ∪ other = neighbours \ {parent}
15      THEN TERMINATE

16  PROCESS MESSAGE ⟨rejected⟩ that has arrived on link k
17      other := other ∪ {k}
18      IF children ∪ other = neighbours \ {parent}
19      THEN TERMINATE

Extra code for the designated processor r

20  IF parent = NIL
21  THEN parent := NONE
22       SEND ⟨adopt⟩ to all links in neighbours
```

Let us outline how the algorithm works. The designated processor *r* sends an ⟨*adopt*⟩ message to all its neighbours, and assigns *NONE* to the *parent* variable (lines 20–22). Since the *parent* variable of *r* is then no longer *NIL*, this segment is never executed again.

When a processor processes message ⟨*adopt*⟩ for the first time, the processor assigns to its own *parent* variable the identifier of the link on which the message has arrived, responds with an ⟨*approved*⟩ message to that link, and forwards an ⟨*adopt*⟩ message to every other link. However, when a processor processes message ⟨*adopt*⟩ again, then the processor responds with a ⟨*rejected*⟩ message, because the *parent* variable is no longer *NIL*.

When a processor processes message ⟨*approved*⟩, it adds the identifier of the link on which the message has arrived to the set *children*. It may turn out that the sets *children* and *other* combined form identifiers of all links adjacent to the processor except for the identifier stored in the *parent* variable. In this case the processor enters a terminating state.

When a processor processes message ⟨*rejected*⟩, the identifier of the link is added to the set *other*. Again, when the union of *children* and *other* is large enough, the processor enters a terminating state.
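The three message handlers can be exercised end to end with a small simulation. The Python sketch below is our own illustration: in-transit messages are kept in a pool and delivered in a random order to model asynchrony, links are represented by neighbour identifiers rather than port numbers, and the example graph is assumed.

```python
import random

# Sketch of Flood on a connected graph. The handler structure
# follows the pseudocode: join on the first <adopt>, answer
# <approved>/<rejected>, terminate once children ∪ other covers
# all non-parent neighbours.

def flood(adj, root, seed=0):
    rng = random.Random(seed)
    parent = {v: None for v in adj}            # NIL
    children = {v: set() for v in adj}
    other = {v: set() for v in adj}
    done = set()
    pool = []                                  # messages in transit

    def maybe_terminate(v):
        if children[v] | other[v] == set(adj[v]) - {parent[v]}:
            done.add(v)

    parent[root] = root                        # stands for NONE (lines 20-22)
    for u in adj[root]:
        pool.append((root, u, "adopt"))

    while pool:
        src, v, kind = pool.pop(rng.randrange(len(pool)))
        if kind == "adopt":                    # lines 5-11
            if parent[v] is None:
                parent[v] = src
                pool.append((v, src, "approved"))
                for u in set(adj[v]) - {src}:
                    pool.append((v, u, "adopt"))
                maybe_terminate(v)             # covers a degree-1 processor
            else:
                pool.append((v, src, "rejected"))
        elif kind == "approved":               # lines 12-15
            children[v].add(src)
            maybe_terminate(v)
        else:                                  # lines 16-19
            other[v].add(src)
            maybe_terminate(v)
    return parent, done

# assumed example graph: a 4-cycle with a chord, rooted at processor 1
adj = {1: [2, 4], 2: [1, 3, 4], 3: [2, 4], 4: [1, 2, 3]}
parent, done = flood(adj, 1)
assert done == set(adj)                        # every processor terminates
assert all(parent[v] in adj[v] for v in adj if v != 1)   # tree edges are graph edges
for v in adj:                                  # parent pointers lead to the root
    u, seen = v, set()
    while u != 1:
        assert u not in seen
        seen.add(u)
        u = parent[u]
```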

We now argue that *Flood* constructs a spanning tree. The key moments in the execution of the algorithm are when any processor assigns a value to its *parent* variable, because these assignments determine how the tree grows.

**Lemma 13.5 **
*For any k with 1 ≤ k ≤ n, there is a time t_k which is the first moment when there are exactly k processors whose parent variables are not `NIL`, and these processors and their parent variables form a tree rooted at r.*
**Proof. **We prove the statement of the lemma by induction on *k*. For the base case, assume that *k* = 1. Observe that processor *r* eventually assigns *NONE* to its *parent* variable. Let *t*_1 be the moment when this assignment happens. Before this assignment no ⟨*adopt*⟩ message has been sent, so the *parent* variable of every other processor is still *NIL*. Hence at time *t*_1 there is exactly one processor, *r*, whose *parent* variable is not *NIL*, and it forms a one-node tree rooted at *r*.

For the inductive step, suppose that 1 ≤ *k* < *n* and that the inductive hypothesis holds for *k*. Consider the time *t*_*k* which is the first moment when there are exactly *k* processors whose *parent* variables are not *NIL*. Because *k* < *n*, there is a non-tree processor, i.e., a processor whose *parent* variable is still *NIL*. But the graph is connected, so there is a non-tree processor adjacent to the tree. (For any subset *T* of processors, a processor is adjacent to *T* when the graph has an edge linking it to some processor in *T*.) When a processor assigns a value to its *parent* variable, it sends ⟨*adopt*⟩ messages to all its other links, so an ⟨*adopt*⟩ message is in transit from the tree to some non-tree processor. Eventually some non-tree processor receives an ⟨*adopt*⟩ message for the first time and assigns a value other than *NIL* to its *parent* variable; the link stored there leads to a tree processor. Let *t*_(*k*+1) be the first moment when this happens. At that moment there are exactly *k* + 1 processors whose *parent* variables are not *NIL*, and they and their *parent* variables form a tree rooted at *r*.

**Theorem 13.6 **
*Eventually each processor terminates, and when every processor has terminated, the subgraph induced by the parent variables forms a spanning tree rooted at .*

**Proof. **By Lemma 13.5, we know that there is a moment *t*_*n* which is the first moment when all *n* processors and their *parent* variables form a spanning tree.

Is it possible that every processor has terminated before time *t*_*n*? By inspecting the code, we see that a processor terminates only after it has received ⟨*rejected*⟩ or ⟨*approved*⟩ messages from all its neighbours other than the one its *parent* link leads to. A processor receives such messages only in response to ⟨*adopt*⟩ messages that the processor sends. Just before time *t*_*n*, there is a processor that still has not even sent its ⟨*adopt*⟩ messages. Hence, not every processor has terminated by time *t*_*n*.

Will every processor eventually terminate? We notice that by time *t*_*n*, each processor either has already sent or will eventually send an ⟨*adopt*⟩ message to all its neighbours other than the one its *parent* link leads to. Whenever a processor receives an ⟨*adopt*⟩ message, the processor responds with ⟨*rejected*⟩ or ⟨*approved*⟩, even if the processor has already terminated. Hence, eventually, each processor will receive either a ⟨*rejected*⟩ or an ⟨*approved*⟩ message on each link to which the processor has sent an ⟨*adopt*⟩ message. Thus, eventually, each processor terminates.

We note that the fact that a processor has terminated does not mean that a spanning tree has already been constructed. In fact, it may happen that processors in a different part of the network have not even received any message, let alone terminated.

**Theorem 13.7 **
*The message complexity of* `Flood` *is O(m), where m is the number of edges in the graph G.*

The proof of this theorem is left as Problem 13-1.
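Although the proof is left as an exercise, the bound is easy to check empirically: an ⟨*adopt*⟩ crosses each edge at most once in each direction, and every ⟨*adopt*⟩ triggers exactly one response, so at most 4*m* messages are sent in total. The Python sketch below is our own counting experiment, with an assumed example graph; the function name `flood_message_count` is hypothetical.

```python
import random

# Empirical check of the O(m) message bound for Flood. Every message
# passes through send(), so `count` is the total number of messages
# ever sent in the execution.

def flood_message_count(adj, root, seed=0):
    rng = random.Random(seed)
    parent = {v: None for v in adj}
    pool, count = [], 0

    def send(src, dst, kind):
        nonlocal count
        count += 1
        pool.append((src, dst, kind))

    parent[root] = root                      # stands for NONE
    for u in adj[root]:
        send(root, u, "adopt")
    while pool:
        src, v, kind = pool.pop(rng.randrange(len(pool)))
        if kind == "adopt":
            if parent[v] is None:            # first <adopt>: join the tree
                parent[v] = src
                send(v, src, "approved")
                for u in set(adj[v]) - {src}:
                    send(v, u, "adopt")
            else:                            # later <adopt>: reject it
                send(v, src, "rejected")
        # <approved>/<rejected> messages are consumed without replies
    return count

# assumed example: the complete graph on 5 processors, m = 10 edges
adj = {v: [u for u in range(5) if u != v] for v in range(5)}
m = 10
assert flood_message_count(adj, 0) <= 4 * m
```

In fact, on this graph the count is the same in every execution: the root sends 4 ⟨*adopt*⟩ messages, each of the other 4 processors joins once and sends 3, and each of the 16 ⟨*adopt*⟩ messages draws exactly one response, 32 messages in total.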

**Exercises**

13.2-1 It may happen that one processor has terminated even though another processor has not even received any message. Show a simple network and how to delay message delivery and processor computation to demonstrate that this can indeed happen.

13.2-2 It may happen that a processor has terminated but may still respond to a message. Show a simple network and how to delay message delivery and processor computation to demonstrate that this can indeed happen.