Part IV. COMPUTER NETWORKS

Table of Contents

13. Distributed Algorithms
13.1 Message passing systems and algorithms
13.1.1 Modeling message passing systems
13.1.2 Asynchronous systems
13.1.3 Synchronous systems
13.2 Basic algorithms
13.2.1 Broadcast
13.2.2 Construction of a spanning tree
13.3 Ring algorithms
13.3.1 The leader election problem
13.3.2 The leader election algorithm
13.3.3 Analysis of the leader election algorithm
13.4 Fault-tolerant consensus
13.4.1 The consensus problem
13.4.2 Consensus with crash failures
13.4.3 Consensus with Byzantine failures
13.4.4 Lower bound on the ratio of faulty processors
13.4.5 A polynomial algorithm
13.4.6 Impossibility in asynchronous systems
13.5 Logical time, causality, and consistent state
13.5.1 Logical time
13.5.2 Causality
13.5.3 Consistent state
13.6 Communication services
13.6.1 Properties of broadcast services
13.6.2 Ordered broadcast services
13.6.3 Multicast services
13.7 Rumor collection algorithms
13.7.1 Rumor collection problem and requirements
13.7.2 Efficient gossip algorithms
13.8 Mutual exclusion in shared memory
13.8.1 Shared memory systems
13.8.2 The mutual exclusion problem
13.8.3 Mutual exclusion using powerful primitives
13.8.4 Mutual exclusion using read/write registers
13.8.5 Lamport's fast mutual exclusion algorithm
14. Network Simulation
14.1 Types of simulation
14.2 The need for communications network modelling and simulation
14.3 Types of communications networks, modelling constructs
14.4 Performance targets for simulation purposes
14.5 Traffic characterisation
14.6 Simulation modelling systems
14.6.1 Data collection tools and network analysers
14.6.2 Model specification
14.6.3 Data collection and simulation
14.6.4 Analysis
14.6.5 Network analysers
14.6.6 Sniffer
14.7 Model Development Life Cycle (MDLC)
14.8 Modelling of traffic burstiness
14.8.1 Model parameters
14.8.2 Implementation of the Hurst parameter
14.8.3 Validation of the baseline model
14.8.4 Consequences of traffic burstiness
14.8.5 Conclusion
14.9 Appendix A
14.9.1 Measurements for link utilisation
14.9.2 Measurements for message delays
15. Parallel Computations
15.1 Parallel architectures
15.1.1 SIMD architectures
15.1.2 Symmetric multiprocessors
15.1.3 Cache-coherent NUMA architectures
15.1.4 Non-cache-coherent NUMA architectures
15.1.5 No remote memory access architectures
15.1.6 Clusters
15.1.7 Grids
15.2 Performance in practice
15.3 Parallel programming
15.3.1 MPI programming
15.3.2 OpenMP programming
15.3.3 Other programming models
15.4 Computational models
15.4.1 PRAM
15.4.2 BSP, LogP and QSM
15.4.3 Mesh, hypercube and butterfly
15.5 Performance in theory
15.6 PRAM algorithms
15.6.1 Prefix
15.6.2 Ranking
15.6.3 Merge
15.6.4 Selection
15.6.5 Sorting
15.7 Mesh algorithms
15.7.1 Prefix on chain
15.7.2 Prefix on square
16. Systolic Systems
16.1 Basic concepts of systolic systems
16.1.1 An introductory example: matrix product
16.1.2 Problem parameters and array parameters
16.1.3 Space coordinates
16.1.4 Serialising generic operators
16.1.5 Assignment-free notation
16.1.6 Elementary operations
16.1.7 Discrete timesteps
16.1.8 External and internal communication
16.1.9 Pipelining
16.2 Space-time transformation and systolic arrays
16.2.1 Further example: matrix product
16.2.2 The space-time transformation as a global view
16.2.3 Parametric space coordinates
16.2.4 Symbolically deriving the running time
16.2.5 How to unravel the communication topology
16.2.6 Inferring the structure of the cells
16.3 Input/output schemes
16.3.1 From data structure indices to iteration vectors
16.3.2 Snapshots of data structures
16.3.3 Superposition of input/output schemes
16.3.4 Data rates induced by space-time transformations
16.3.5 Input/output expansion
16.3.6 Coping with stationary variables
16.3.7 Interleaving of calculations
16.4 Control
16.4.1 Cells without control
16.4.2 Global control
16.4.3 Local control
16.4.4 Distributed control
16.4.5 The cell program as a local view
16.5 Linear systolic arrays
16.5.1 Matrix-vector product
16.5.2 Sorting algorithms
16.5.3 Lower triangular linear equation systems