Message Passing Models I
========================

"The world is a network of talking processes."

SLIDE Foster, Figure 1.7
Represented by directed graphs

Shared variable model: implicit communication
---------------------------------------------
    P1: a = b    <---->    P2: b = e
                 MAGIC!
Close coupling: processes may interfere at any time without notice!

Message passing model: explicit communication
---------------------------------------------
    P1: receive(P2, a)     P2: send(P1, e)
- Processes operate in a local context
- Only explicit interaction (communication/synchronization)

Main issues
===========

How is the process network constructed?
---------------------------------------
- implicit (MPI, PVM, ...)
  at program startup, one process is running per processor
  (SPMD/MPMD: single/multiple programs, multiple data)
- explicit (OCCAM, Fortran M)
  process description and mapping language
  multiple processes per processor possible
- static (OCCAM)
  number of processes fixed at compile time
      PAR i = 0 FOR 4
        P(i)
- dynamic (Transputer C)
      for (i = 0; i < n; i++)
        pid[i] = start(P, i);

Is there an explicit process interface?
---------------------------------------
- channels/ports/message queues/mailboxes
      P:  channel[int] c;
          start P1(c); start P2(c);

      P1(c):                 P2(c):
        int a;                 send(c, e)
        receive(c, a)
  - Processes don't know of each other!
  - No "unsuspected" messages can arrive
- process identifiers
      P1: receive(P2, a)     P2: send(P1, e)
  Processes have each other's identity! How?
      P:  p1 = start(P1); p2 = start(P2, p1)

      P1():                  P2(pid p):
        receive(ANY, a)        send(p, e)
  - pids must be forwarded explicitly from the process creator
    to all potential senders
  - a process can never be sure what it will receive
- processors
      for (int i = 0; i < P; i++)
        send(i, e);
  Only one process per processor! (SPMD/MPMD)
  - processor numbers are known globally
  - a process can never be sure what it will receive

Is message passing strongly typed?
----------------------------------
- OCCAM: typed channels
      channel[int] c;
      PAR
        SEQ
          int x, y;
          c ? x;
          y = x;
        SEQ
          c ! 17
  The compiler checks correct interfacing!
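For comparison, the channel-style interface above can be sketched in Python: threads stand in for the processes P1 and P2, and a shared `queue.Queue` stands in for the channel c (the function names are illustrative, not part of any message passing library):

```python
import threading
import queue

def p1(c: queue.Queue, result: list) -> None:
    # c ? x : receive blocks until a value arrives on the channel
    x = c.get()
    y = x
    result.append(y)

def p2(c: queue.Queue) -> None:
    # c ! 17 : send; the sender knows only the channel, not the receiver
    c.put(17)

c = queue.Queue()          # the shared channel
result = []
threads = [threading.Thread(target=p1, args=(c, result)),
           threading.Thread(target=p2, args=(c,))]
for t in threads: t.start()
for t in threads: t.join()
print(result)              # -> [17]
```

Note that neither thread holds a reference to the other: as in the channel model, all interaction goes through c, so no "unsuspected" messages can arrive.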
- MPI: untyped buffers, no compile-time checking
      MPI_Send(buff, count, datatype, dest, tag, comm)
        buff     ... address of send buffer
        count    ... number of elements
        datatype ... type of the elements (e.g. MPI_INT)
        dest     ... destination process (rank)
        tag      ... message tag
        comm     ... communicator
  - type mismatches between sender and receiver may occur!
  - representation issues are of concern

Is message passing synchronous or asynchronous?
-----------------------------------------------
- the receive operation blocks the process if no value is available
  (special operations exist for polling)
- synchronous: the send operation may block if the receiver is not
  ready (OCCAM)
  - sender-side synchronization
  - no message buffering required
  - prone to deadlocks:
        par i = 0 to N
          send((i+1) % N, e)
          receive(ANY, v)
- asynchronous: the send operation may continue immediately
  - message buffering required
  - the finite size of buffers is problematic
  - solution: sending may go ahead of receiving ONCE only!

Determinism
-----------
Can a communication be selected from more than one potential sender?

OCCAM:
    ALT
      (x > 0)  c1 ? x
        ...
      (x > 0)  c2 ? x
        ...
      (x <= 0) c3 ? x
        ...
      c4 ? x
        ...
    x > 0:  c1, c2, c4 may communicate
    x <= 0: c3, c4 may communicate
Potential for non-determinism!

Non-deterministic merge:
    merger(array[channel[T]] in, channel[T] out):
      ALT i = 0 FOR N
        T x
        in[i] ? x
        out ! x

OCCAM
=====
Transputer language
Based on CSP (Communicating Sequential Processes, Hoare)

Basic processes
---------------
    v := e    assignment
    c ! e     send
    c ? v     receive
    SKIP      do nothing
    STOP      block forever

Types
-----
Basic:    INT, BOOL, CHAR, ...
Arrays:   [COUNT]T
Channels: CHAN OF T

Compound processes
------------------
Declarations precede a process:  Type vars : process

    SEQ
      process1
      process2

    IF
      guarded-process1
      guarded-process2

    CASE expression
      value1
        process1
      value2
        process2

    WHILE condition
      process

    PAR
      process1
      process2

    ALT
      guarded-process1
      guarded-process2

Repeated processes
------------------
REP is one of SEQ, PAR, ALT, IF:

    REP index = base FOR count
      process

Example
-------
SLIDE A pipelined sort program
("A Tutorial Introduction to OCCAM Programming", Pountain and May, 1987)

Pipeline of processes (as many processes as numbers)
- variables: highest and next
- a number entering the process is compared with highest
- if it is not larger, it is forwarded
- if it is larger, highest is set to the number and the previous
  highest is forwarded

Process Placement
-----------------
    PLACED PAR i = 0 FOR n
      PROCESSOR i
        action

Used for programming transputers
- multiprocessors
- embedded systems, controllers, etc.

Message Passing Interface (MPI)
===============================
Standard interface for message passing programming
- library of message passing routines (not a language)
- bindings available for C, Fortran, C++, ...
- implementations available for multiprocessors and computer networks

Program startup
- fixed set of processes created at program initialization (outside MPI)
- one process per processor
- processes may execute different programs (MPMD)

MPI Basics
----------
SLIDE Foster, Figure 8.1
    MPI_INIT
    MPI_FINALIZE
    MPI_COMM_SIZE
    MPI_COMM_RANK
    MPI_SEND
    MPI_RECV

Examples
--------
Bridge Construction Problem
    SLIDE Foster, Program 8.1
Pairwise Interactions
    SLIDE Foster, Program 8.2

Global Operations
-----------------
SLIDE Foster, Figure 8.2
Barrier, data movement, and reduction operations:
    MPI_BARRIER
    MPI_BCAST
    MPI_GATHER
    MPI_SCATTER
    MPI_REDUCE
    MPI_ALLREDUCE
SLIDE Foster, Figure 8.3

Asynchronous communication
--------------------------
SLIDE Foster, Figure 8.6
    MPI_IPROBE
    MPI_PROBE
    MPI_GET_COUNT

Modularity
----------
SLIDE Foster, Figure 8.7
    MPI_COMM_DUP
    MPI_COMM_SPLIT
    MPI_INTERCOMM_CREATE
    MPI_COMM_FREE
SLIDE Foster, Figure 8.8, Figure 8.9
SLIDE Figure 8.10, Figure 8.11
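Running real MPI code requires an MPI runtime, but the SPMD pattern behind the six basic calls can be sketched with Python's multiprocessing standing in for that runtime (a sketch only; the names here are illustrative, not MPI's API): every rank executes the same function, ranks other than 0 send one message, and rank 0 receives from all of them, mirroring MPI_COMM_RANK / MPI_SEND / MPI_RECV:

```python
import multiprocessing as mp

def worker(rank: int, size: int, conn) -> None:
    # Every rank runs the same program (SPMD); behaviour branches on rank,
    # just as MPI programs branch on the result of MPI_COMM_RANK.
    if rank != 0:
        conn.send((rank, rank * rank))   # stands in for MPI_SEND to rank 0

def main() -> list:
    size = 4                             # stands in for MPI_COMM_SIZE
    parent_conns, procs = [], []
    for rank in range(1, size):
        parent, child = mp.Pipe()        # the "runtime" link to rank 0
        p = mp.Process(target=worker, args=(rank, size, child))
        p.start()
        parent_conns.append(parent)
        procs.append(p)
    # Rank 0 receives one message from every other rank (MPI_RECV);
    # recv() blocks until the matching send has happened.
    received = sorted(conn.recv() for conn in parent_conns)
    for p in procs:
        p.join()
    return received

if __name__ == "__main__":
    print(main())                        # -> [(1, 1), (2, 4), (3, 9)]
```

Unlike MPI, this sketch creates its processes dynamically; in MPI the fixed set of processes already exists before MPI_INIT is called.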