At the end of the MPI section, the student should be able to:
- Explain how communication between processes/threads differs in a shared-memory system vs. a distributed-memory system
- Describe deadlocking communication patterns and approaches for avoiding them (see the deadlock sketch after this list)
- Contrast blocking and non-blocking communication (see the non-blocking sketch after this list)
- Write MPI programs in C, C++, or Fortran for:
  - Communicating data between processes
  - Using collective communication calls over a subset of processes (see the sub-communicator sketch after this list)
- Compile and run MPI programs on supercomputers
- Start exploring some advanced MPI features relevant to their use case
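
A minimal sketch of the classic deadlock pattern and one way around it. The ring exchange and buffer contents here are illustrative, not taken from the course demos: if every process first calls a blocking `MPI_Send` to its neighbour and only then posts `MPI_Recv`, all processes can block once the message no longer fits in internal buffering. `MPI_Sendrecv` avoids this by letting the library order the two transfers.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Exchange with the next/previous rank in a ring (illustrative pattern). */
    int next = (rank + 1) % size;
    int prev = (rank - 1 + size) % size;
    double sendbuf = (double)rank, recvbuf = -1.0;

    /* Deadlock-prone version (do NOT do this):
     *   MPI_Send(&sendbuf, 1, MPI_DOUBLE, next, 0, MPI_COMM_WORLD);
     *   MPI_Recv(&recvbuf, 1, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
     * Every rank may block in MPI_Send waiting for a receive to be posted.
     */

    /* Safe version: the combined call cannot deadlock. */
    MPI_Sendrecv(&sendbuf, 1, MPI_DOUBLE, next, 0,
                 &recvbuf, 1, MPI_DOUBLE, prev, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("Rank %d received %.0f from rank %d\n", rank, recvbuf, prev);
    MPI_Finalize();
    return 0;
}
```

On a cluster, sketches like this would typically be compiled with an MPI wrapper, e.g. `mpicc deadlock.c -o deadlock` (the file name is arbitrary), and launched with the system's launcher, e.g. `mpirun -np 4 ./deadlock`, or `srun` under Slurm; the exact commands are system-specific.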
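A minimal non-blocking sketch of the same ring exchange (again illustrative): `MPI_Irecv` and `MPI_Isend` return immediately, and the transfer is only guaranteed complete after the wait call, which also removes the deadlock and leaves room to overlap computation with communication.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;
    int prev = (rank - 1 + size) % size;
    int sendval = rank, recvval = -1;
    MPI_Request reqs[2];

    /* Post both operations; neither call blocks. */
    MPI_Irecv(&recvval, 1, MPI_INT, prev, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&sendval, 1, MPI_INT, next, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ... independent computation could overlap with the transfer here ... */

    /* The buffers may be touched again only after the requests complete. */
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("Rank %d received %d from rank %d\n", rank, recvval, prev);
    MPI_Finalize();
    return 0;
}
```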
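A hedged sketch of a collective over a subset of processes: `MPI_Comm_split` partitions `MPI_COMM_WORLD` by a colour value (even vs. odd ranks here, chosen purely for illustration), and the subsequent `MPI_Allreduce` then runs within each sub-communicator only.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Split the world into two sub-communicators: even and odd ranks.
     * The colour criterion (rank % 2) is only an example. */
    int color = rank % 2;
    MPI_Comm subcomm;
    MPI_Comm_split(MPI_COMM_WORLD, color, rank, &subcomm);

    /* Collective over the subset only: sums the world ranks within
     * each sub-communicator. */
    int sum = 0;
    MPI_Allreduce(&rank, &sum, 1, MPI_INT, MPI_SUM, subcomm);

    printf("World rank %d (color %d): subset sum = %d\n", rank, color, sum);

    MPI_Comm_free(&subcomm);
    MPI_Finalize();
    return 0;
}
```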
See the demos directory for the demo codes referred to in the slides:
- Message chain
- Heat equation solver: Tasks 1-2
- (Bonus) Parallel pi with any number of processes
- (Bonus) Broadcast and scatter
- Heat equation solver: Task 3
- Cartesian topology (see the sketch after this list)
- User-defined datatypes (see the sketch after this list)
- Persistent communication (see the sketch after this list)
- Heat equation solver: Remaining tasks
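
For the Cartesian topology demo, a minimal sketch (a 1D periodic decomposition; the dimensionality is chosen for brevity, not taken from the demo): `MPI_Cart_create` builds a communicator that knows the process layout, and `MPI_Cart_shift` looks up the neighbour ranks.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* 1D periodic Cartesian topology over all processes. */
    int dims[1] = {size};
    int periods[1] = {1};   /* wrap around at the ends */
    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 1, dims, periods, 1, &cart);

    /* Neighbour ranks one step below/above along dimension 0. */
    int down, up;
    MPI_Cart_shift(cart, 0, 1, &down, &up);

    MPI_Comm_rank(cart, &rank);  /* rank may differ after reordering */
    printf("Cart rank %d: down=%d up=%d\n", rank, down, up);

    MPI_Comm_free(&cart);
    MPI_Finalize();
    return 0;
}
```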
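For the user-defined datatypes demo, a hedged sketch: `MPI_Type_vector` describes a strided block, here one column of a small row-major matrix (the 4x4 size is illustrative), so a single send transfers non-contiguous data. Run with at least two processes.

```c
#include <mpi.h>
#include <stdio.h>

#define N 4

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double a[N][N];   /* row-major: columns are non-contiguous */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            a[i][j] = (rank == 0) ? 10.0 * i + j : -1.0;

    /* One column = N blocks of 1 double, stride N doubles apart. */
    MPI_Datatype column;
    MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    if (rank == 0) {
        /* Send column 1, starting at its first element. */
        MPI_Send(&a[0][1], 1, column, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&a[0][1], 1, column, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Received column: %.0f %.0f %.0f %.0f\n",
               a[0][1], a[1][1], a[2][1], a[3][1]);
    }

    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}
```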
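For the persistent communication demo, a minimal sketch: when the same transfer repeats, as in the iterations of the heat equation solver, `MPI_Send_init`/`MPI_Recv_init` set the communication up once, and each iteration then only needs `MPI_Startall` plus `MPI_Waitall`. The ring pattern and the iteration count of 3 are illustrative.

```c
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int next = (rank + 1) % size;
    int prev = (rank - 1 + size) % size;
    double sendbuf, recvbuf;
    MPI_Request reqs[2];

    /* Set up the communication pattern once... */
    MPI_Recv_init(&recvbuf, 1, MPI_DOUBLE, prev, 0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Send_init(&sendbuf, 1, MPI_DOUBLE, next, 0, MPI_COMM_WORLD, &reqs[1]);

    /* ...then restart it cheaply in every iteration. */
    for (int iter = 0; iter < 3; iter++) {
        sendbuf = rank + 100.0 * iter;  /* refresh the buffer before starting */
        MPI_Startall(2, reqs);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
    }

    MPI_Request_free(&reqs[0]);
    MPI_Request_free(&reqs[1]);
    MPI_Finalize();
    return 0;
}
```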