Introduction to Parallel Computation
315.394, WS 96/97 (Start: October 7)
Mo 8.30-10.00, T811
The design of parallel algorithms is also influenced by the view that the
programmer has of the underlying parallel execution model. It is therefore
important for programmers to know the principles of parallel architectures:
their basic features, their possibilities, and their limitations.
This course will provide some perspective on currently available parallel
hardware and software systems. Rather than enumerating machines and
languages, we will provide an integrated view of parallel hardware and
software and sketch recent developments and trends.
The course will be based on practical programming work on a
vector supercomputer, a Sequent
shared-memory SMP (symmetric multiprocessor), and a
distributed-memory MPP (massively parallel processor). In addition, a
paper-and-pencil program will be developed for the Maspar
MP-1 SIMD (single instruction,
multiple data) array computer. Experience in C programming is assumed.
315.525: Parallel Languages and Programming
- Michael J. Quinn
- Parallel Computing -- Theory and Practice,
2nd edition, McGraw-Hill, New York, NY, 1994.
- Ian T. Foster
- Designing and Building Parallel Programs --
Concepts and Tools for Parallel Software
Engineering, Addison Wesley, Reading, MA, 1995.
- Kai Hwang
- Advanced Computer Architecture: Parallelism,
Scalability, Programmability, McGraw-Hill, New York, 1993.
- Introduction
Parallel machine models, parallel programming models, a parallel algorithm.
- Vectorization I
Pipelining, vector architectures, vector instructions, chaining, vector
libraries, vector statements, vectorizing compilers.
- Vectorization II
Convex 3440 architecture and programming, vectorization, strip mining, loop
distribution, loop interchange, temporal and spatial parallelism.
- Data-Parallel Programming I
Data parallelism, SIMD architectures, Maspar MP-1 architecture and
programming.
- Data-Parallel Programming II
Mesh algorithms, cube algorithms, mapping strategies.
- Parallel Program Design
Methodological design, partitioning, communication, agglomeration, mapping.
- Shared Memory Programming I
Shared memory architectures, automatic scheduling, cache policies,
synchronization mechanisms, Unix programming.
- Shared Memory Programming II
Sequent Symmetry architecture and programming, micro-tasking library, data
decomposition policies, algorithm examples.
- Performance of Parallel Programs
Speedup, efficiency, Amdahl's law, scaled speedup, Gustafson's law, load
balancing.
- Message Passing Programming I
Distributed memory systems, workstation clusters, static interconnection
topologies, switched networks, wormhole routing.
- Message Passing Programming II
nCube/2 architecture and programming, row/column oriented algorithms,
block-oriented algorithms, mapping, Gray codes.
- Scheduling and Performance Analysis
Static scheduling, Graham's scheduling algorithm, deadlocks, performance
analysis.
- Exercise 1
Vector programming.
- Exercise 2
Data-parallel programming.
- Exercise 3
Shared memory programming.
- Exercise 4
Message passing programming.
Maintained by: Wolfgang Schreiner
Last Modification: March 19, 1997