Introduction to Parallel Computation
315.394, WS 96/97 (Start: October 7)
Mo 8.30-10.00, T811
Wolfgang Schreiner
The design of parallel algorithms is also influenced by the programmer's view of the underlying parallel execution model. Programmers therefore need to know the principles of parallel architectures: their basic features, their possibilities, and their limitations.
This course provides some perspective on currently available parallel hardware and software systems. Rather than enumerating machines and languages, we give an integrated view of parallel hardware and software and sketch recent developments and trends.
The course is based on practical programming work on a Convex C3440 vector supercomputer, a Sequent Symmetry shared-memory SMP (symmetric multiprocessor), and an nCube/2 distributed-memory MPP (massively parallel processor). In addition, a paper-and-pencil program will be developed for the Maspar MP-1 SIMD (single instruction, multiple data) array computer. Experience in C programming is assumed.
- Michael J. Quinn: Parallel Computing -- Theory and Practice, 2nd edition, McGraw-Hill, New York, NY, 1994.
- Ian T. Foster: Designing and Building Parallel Programs -- Concepts and Tools for Parallel Software Engineering, Addison-Wesley, Reading, MA, 1995.
- Kai Hwang: Advanced Computer Architecture: Parallelism, Scalability, Programmability, McGraw-Hill, New York, NY, 1993.
315.525: Parallel Languages and Programming Models
- Introduction (PostScript Slides): Parallel machine models, parallel programming models, a parallel algorithm.
- Vectorization I (PostScript Slides): Pipelining, vector architectures, vector instructions, chaining, vector libraries, vector statements, vectorizing compilers.
- Vectorization II (PostScript Slides): Convex 3440 architecture and programming, vectorization, strip mining, loop distribution, loop interchange, temporal and spatial parallelism.
- Data-Parallel Programming I (PostScript Slides): Data parallelism, SIMD architectures, Maspar MP-1 architecture and programming.
- Data-Parallel Programming II (PostScript Slides): Mesh algorithms, cube algorithms, mapping strategies.
- Parallel Program Design (PostScript Slides): Methodological design, partitioning, communication, agglomeration, mapping, modularity, examples.
- Shared Memory Programming I (PostScript Slides): Shared memory architectures, automatic scheduling, cache policies, synchronization mechanisms, Unix programming.
- Shared Memory Programming II (PostScript Slides): Sequent Symmetry architecture and programming, micro-tasking library, data decomposition policies, algorithm examples.
- Performance of Parallel Programs (PostScript Slides): Speedup, efficiency, Amdahl's law, scaled speedup, Gustafson's law, load balancing.
- Message Passing Programming I (PostScript Slides): Distributed memory systems, workstation clusters, static interconnection topologies, switched networks, wormhole routing.
- Message Passing Programming II (PostScript Slides): nCube/2 architecture and programming, row/column-oriented algorithms, block-oriented algorithms, mapping, Gray codes.
- Scheduling and Performance Analysis (PostScript Slides): Static scheduling, Graham's scheduling algorithm, deadlocks, performance tools.
- Exercise 1 (Handout, Fortran Program): Vectorization.
- Exercise 2 (Handout, C Program): Data-parallel programming.
- Exercise 3 (Handout, C Program): Shared memory programming.
- Exercise 4 (Handout, C Program): Message passing programming.
Maintained by: Wolfgang Schreiner
Last Modification: March 19, 1997