## 15.7 Mesh algorithms

To illustrate another model of computation, we present two algorithms solving the prefix problem on meshes.

### 15.7.1 Prefix on chain

Let us suppose that processor P_i of the chain stores element x_i in its local memory, and that after the parallel computation the prefix y_i = x_1 ⊕ x_2 ⊕ ⋯ ⊕ x_i will be stored in the local memory of P_i. At first we introduce a naive algorithm. Its input is the sequence of elements X = x_1, x_2, …, x_p, and its output is the sequence Y = y_1, y_2, …, y_p containing the prefixes.
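Using addition as the associative operation ⊕, the prefix problem itself can be sketched as follows (a minimal illustration with our own function name `prefixes`, not part of the chapter's pseudocode):

```python
# Prefix problem: given x_1, ..., x_p and an associative operation op (for ⊕),
# compute y_i = x_1 ⊕ x_2 ⊕ ... ⊕ x_i for every i.
def prefixes(x, op):
    """Return the list of prefixes y_1, ..., y_p of x under op."""
    y = []
    acc = None
    for item in x:
        acc = item if acc is None else op(acc, item)
        y.append(acc)
    return y

print(prefixes([3, 1, 4, 1, 5], lambda a, b: a + b))  # [3, 4, 8, 9, 14]
```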

`Chain-Prefix(⊕, X)`

```
1  P_1 computes and stores y_1 ← x_1, then sends y_1 to P_2
2  FOR i ← 2 TO p − 1
3     DO P_i gets y_{i−1} from P_{i−1},
4        then computes and stores y_i ← y_{i−1} ⊕ x_i,
5        and sends y_i to P_{i+1}
6  P_p gets y_{p−1} from P_{p−1}, then computes and stores y_p ← y_{p−1} ⊕ x_p
```

Truth be told, this is not a real parallel algorithm: the processors work one after another.

Theorem 15.18 Algorithm `Chain-Prefix` determines the prefixes of p elements using a chain in Θ(p) time.

Proof. The cycle in lines 2–5 requires p − 2 time units, and line 1 and line 6 require 1 time unit each, so the total running time is Θ(p).

Since the prefixes can be determined in O(p) time using a single sequential processor, while the work of `Chain-Prefix` is p · Θ(p) = Θ(p²), `Chain-Prefix` is not work-effective.
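The sequential behaviour of the chain can be reproduced with a short simulation (a sketch under our own naming, assuming an associative operation `op`): since P_i cannot start before it receives y_{i−1} from its left neighbour, the simulation is a plain sequential loop.

```python
# Simulating Chain-Prefix on a virtual chain of p processors:
# processor i holds x[i]; messages travel only between neighbours.
def chain_prefix(x, op):
    p = len(x)
    y = [None] * p
    y[0] = x[0]                    # line 1: P_1 stores y_1 = x_1
    for i in range(1, p):          # lines 2-6: P_i waits for y_{i-1} from P_{i-1},
        y[i] = op(y[i - 1], x[i])  # then computes and stores y_i = y_{i-1} ⊕ x_i
    return y                       # p - 1 sequential message steps: Θ(p) time

print(chain_prefix([1, 2, 3, 4], lambda a, b: a + b))  # [1, 3, 6, 10]
```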

### 15.7.2 Prefix on square

An algorithm similar to `Chain-Prefix` can be developed for a square too.

Let us consider a square of size √p × √p. We need an indexing of the processors. There are many different indexing schemes, but for the next algorithm `Square-Prefix` one of the simplest solutions suffices: the row-major indexing scheme, where processor P_{i,j} (the processor in row i and column j) gets the index (i − 1)√p + j.
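The row-major scheme amounts to a one-line formula (the helper name below is ours; processor indices start at 1):

```python
# Row-major indexing on a side x side square (side = sqrt(p)):
# processor P_{i,j} gets the index (i - 1) * side + j.
def row_major_index(i, j, side):
    return (i - 1) * side + j

# On a 4 x 4 square, P_{2,3} gets index (2 - 1) * 4 + 3 = 7:
print(row_major_index(2, 3, 4))  # 7
```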

The input and the output are the same as in the case of `Chain-Prefix`.

The processors P_{i,1}, P_{i,2}, …, P_{i,√p} form the processor row i, and the processors P_{1,j}, P_{2,j}, …, P_{√p,j} form the processor column j. The input stored by the processors of row i is denoted by X_i, and the corresponding output is denoted by Y_i.

The algorithm works in 3 rounds. In the first round (lines 1–8) the processor rows compute the row-local prefixes (working as the processors of `Chain-Prefix`). In the second round (lines 9–17) column √p computes the prefixes of the row totals using the results of the first round, and each processor P_{i,√p} of this column sends the computed prefix to its neighbour P_{i+1,√p} in the next row. Finally, in the third round (lines 18–23) the rows determine the final prefixes.

`Square-Prefix(⊕, X)`

```
 1  IN PARALLEL FOR i ← 1 TO √p
 2     DO P_{i,1} computes and stores y_{i,1} ← x_{i,1}, then sends y_{i,1} to P_{i,2}
 3  IN PARALLEL FOR i ← 1 TO √p
 4     FOR j ← 2 TO √p − 1
 5        DO P_{i,j} gets y_{i,j−1} from P_{i,j−1}, then computes and
 6           stores y_{i,j} ← y_{i,j−1} ⊕ x_{i,j}, and sends y_{i,j} to P_{i,j+1}
 7  IN PARALLEL FOR i ← 1 TO √p
 8     DO P_{i,√p} gets y_{i,√p−1} from P_{i,√p−1}, then computes and stores y_{i,√p} ← y_{i,√p−1} ⊕ x_{i,√p}
 9  P_{1,√p} computes and stores z_1 ← y_{1,√p}, then sends z_1 to P_{2,√p}
10  FOR i ← 2 TO √p − 1
11     DO P_{i,√p} gets z_{i−1} from P_{i−1,√p}, then computes and
12        stores z_i ← z_{i−1} ⊕ y_{i,√p}, and sends z_i to P_{i+1,√p}
13  P_{√p,√p} gets z_{√p−1} from P_{√p−1,√p}, then computes and stores z_{√p} ← z_{√p−1} ⊕ y_{√p,√p}
14  IN PARALLEL FOR i ← 2 TO √p
15     DO P_{i,√p} stores y_{i,√p} ← z_i
16  IN PARALLEL FOR i ← 2 TO √p
17     DO P_{i,√p} sends z_{i−1} to P_{i,√p−1}
18  IN PARALLEL FOR i ← 2 TO √p
19     FOR j ← √p − 1 DOWNTO 2
20        DO P_{i,j} gets z_{i−1} from P_{i,j+1}, then computes and
21           stores y_{i,j} ← z_{i−1} ⊕ y_{i,j}, and sends z_{i−1} to P_{i,j−1}
22  IN PARALLEL FOR i ← 2 TO √p
23     DO P_{i,1} gets z_{i−1} from P_{i,2}, then computes and stores y_{i,1} ← z_{i−1} ⊕ y_{i,1}
```

Theorem 15.19 Algorithm `Square-Prefix` solves the prefix problem of p elements using a square of size √p × √p with row-major indexing in Θ(√p) time.

Proof. In the first round lines 1–2 contain 1 parallel operation, lines 3–6 require √p − 2 operations, and lines 7–8 again 1 operation, that is all together √p operations. In a similar way, in the third round lines 18–23 require √p − 1 time units, and in round 2 lines 9–17 require √p + 2 time units. The sum of the necessary time units is 3√p + 1 = Θ(√p).

Example 15.8 Prefix computation on a square of size 4 × 4. Figure 15.23(a) shows the 16 input elements. In the first round `Square-Prefix` computes the row-local prefixes; part (b) of the figure shows the results. Then in the second round only the processors of the fourth column work and determine the column-local prefixes; the results are in part (c) of the figure. Finally, in the third round the algorithm determines the final results, shown in part (d) of the figure.
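The three rounds can be replayed with a small simulation (a sketch with our own names, using addition for ⊕ and side = 4, i.e. the 16-element setting of Example 15.8):

```python
# Simulating the three rounds of Square-Prefix on a side x side square.
def square_prefix(x, side, op):
    # round 1: every row computes its row-local prefixes (as in Chain-Prefix)
    y = [row[:] for row in x]
    for i in range(side):
        for j in range(1, side):
            y[i][j] = op(y[i][j - 1], y[i][j])
    # round 2: the last column computes the prefixes z_i of the row totals
    z = [y[0][side - 1]]
    for i in range(1, side):
        z.append(op(z[-1], y[i][side - 1]))
    # round 3: row i (for i >= 2) combines z_{i-1}, received from the last
    # column and propagated leftwards, with its row-local prefixes
    for i in range(1, side):
        for j in range(side):
            y[i][j] = op(z[i - 1], y[i][j])
    return y

x = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
result = square_prefix(x, 4, lambda a, b: a + b)
```

Reading `result` row by row gives the prefixes 1, 3, 6, 10, 15, …, 136 of the 16 elements in row-major order.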

### Chapter notes

Basic sources of this chapter are: for architectures and models, the book of Leopold [] and the book of Sima, Fountain and Kacsuk []; for parallel programming, the books due to Kumar et al. [] and []; for parallel algorithms, the books of Berman and Paul [], Cormen, Leiserson and Rivest [], the book written by Horowitz, Sahni and Rajasekaran [], the book [], and the recent book due to Casanova, Legrand and Robert [].

The website [] contains the Top 500 list, a regularly updated survey of the most powerful computers worldwide []. According to this list, 42% of these computers are clusters.

The described classifications of computers were proposed by Flynn [] and Leopold []. Figures 15.1, 15.2, 15.3, 15.4, 15.5 and 15.7 are taken from the book of Leopold [], and program 15.6 from the book written by Gropp et al. [].

The clusters are characterised using the book of Pfister []; grids are presented on the basis of the book and the manuscript of Foster and Kesselman [], [].

The problems of shared memory are dealt with in the book written by Hwang and Xu [], the book due to Kleiman, Shah and Smaalders [], and the textbook of Tanenbaum and van Steen [].

Details on concepts such as tasks, processes and threads can be found in many textbooks, e.g. in [] and []. Decomposition of tasks into smaller parts is analysed by Tanenbaum and van Steen [].

The laws concerning speedup were described by Amdahl [], Gustafson and Barsis [], and Brent []. Kandemir, Ramanujam and Choudhary review the different methods of improving locality []. Wolfe [] analyses in detail the connection between the transformation of the data and of the program code. In connection with code optimisation, the book published by Kennedy and Allen [] is a useful source.

The MPI programming model is presented according to Gropp, Snir, Nitzberg and Lusk [], while the description of the OpenMP model is based on the paper due to Chandra, Dagum, Kohr, Maydan, McDonald and Menon [], and further on a review found on the internet [].

Lewis and Berg [] discuss pthreads, while Oaks and Wong [] discuss Java threads in detail. A description of High Performance Fortran can be found in the book by Koelbel et al. []. Among others, Wolfe [] studied parallelising compilers.

The concept of the PRAM is due to Fortune and Wyllie and has been known since 1978 []. BSP was proposed in 1990 by Valiant []. LogP was suggested as an alternative to BSP by Culler et al. in 1993 []. QSM was introduced in 1999 by Gibbons, Matias and Ramachandran [].

The majority of the pseudocode conventions used in Section 15.6, as well as the description of crossover points and the comparison of different methods of matrix multiplication, can be found in [].

Readers interested in further programming models, such as skeletons, parallel functional programming, coordination languages and parallel mobile agents, can find a detailed description in []. Further problems and parallel algorithms are analysed in the books of Leighton [], [], in the chapter Memory Management of this book [], and in the book of Horowitz, Sahni and Rajasekaran []. A model of scheduling of parallel processes is discussed in [], [], [].

Cost-optimal parallel merge is analysed by Wu and Olariu in []. New ideas of parallel sorting (such as the application of multiple comparisons to obtain a constant-time sorting algorithm) can be found in the paper of Gasarch, Golub and Kruskal [].