See the earlier message chain exercise for the description of the message chain.
- Implement the program using non-blocking communication, i.e. `MPI_Isend`, `MPI_Irecv`, and `MPI_Wait`. Utilize `MPI_PROC_NULL` when treating the special cases of the first and the last task. You may start from scratch, or use the skeleton code or your solution from the earlier message chain exercise as a starting point (a minimal sketch follows after this list).
- The skeleton code prints out the time spent in communication. Investigate the timings with different numbers of MPI tasks (e.g. 2, 4, 8, 16, ...). Compare the results to the implementation with `MPI_Send`s and `MPI_Recv`s, and pay attention especially to rank 0. Can you explain the behaviour?
- Write a version that uses `MPI_Waitall` instead of separate `MPI_Wait` calls (see the second sketch after this list).
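
For reference, here is a minimal sketch of the non-blocking chain. It is not the course skeleton code: the message size, the tag convention (receiver's rank), and the timing printout are illustrative assumptions only.

```c
#include <stdio.h>
#include <mpi.h>

#define MSG_SIZE 10000          /* assumed message size, for illustration */

int main(int argc, char *argv[])
{
    int rank, ntasks;
    int message[MSG_SIZE], receive[MSG_SIZE];
    MPI_Request requests[2];
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    for (int i = 0; i < MSG_SIZE; i++) {
        message[i] = rank;      /* data passed along the chain */
        receive[i] = -1;        /* stays -1 on rank 0, which receives nothing */
    }

    /* With MPI_PROC_NULL as neighbour, the send of the last task and the
       receive of the first task complete immediately without communicating,
       so no if/else special-casing is needed. */
    int dst = (rank < ntasks - 1) ? rank + 1 : MPI_PROC_NULL;
    int src = (rank > 0) ? rank - 1 : MPI_PROC_NULL;

    t0 = MPI_Wtime();

    /* Post both operations first, then wait for each to complete */
    MPI_Isend(message, MSG_SIZE, MPI_INT, dst, rank + 1, MPI_COMM_WORLD, &requests[0]);
    MPI_Irecv(receive, MSG_SIZE, MPI_INT, src, rank, MPI_COMM_WORLD, &requests[1]);
    MPI_Wait(&requests[0], MPI_STATUS_IGNORE);
    MPI_Wait(&requests[1], MPI_STATUS_IGNORE);

    t1 = MPI_Wtime();

    printf("Rank %3d: time spent in communication %8.6f s\n", rank, t1 - t0);

    MPI_Finalize();
    return 0;
}
```

Compile with `mpicc` and run with, e.g., `mpiexec -n 4 ./a.out` to see the per-rank timings for different numbers of tasks.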
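
For the last item, the only change is that the two `MPI_Wait` calls are replaced by a single `MPI_Waitall` over the request array. A correspondingly minimal sketch, with the same assumed buffers and tags as above:

```c
#include <stdio.h>
#include <mpi.h>

#define MSG_SIZE 10000          /* assumed message size, for illustration */

int main(int argc, char *argv[])
{
    int rank, ntasks;
    int message[MSG_SIZE], receive[MSG_SIZE];
    MPI_Request requests[2];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &ntasks);

    for (int i = 0; i < MSG_SIZE; i++) {
        message[i] = rank;
        receive[i] = -1;
    }

    int dst = (rank < ntasks - 1) ? rank + 1 : MPI_PROC_NULL;
    int src = (rank > 0) ? rank - 1 : MPI_PROC_NULL;

    MPI_Isend(message, MSG_SIZE, MPI_INT, dst, rank + 1, MPI_COMM_WORLD, &requests[0]);
    MPI_Irecv(receive, MSG_SIZE, MPI_INT, src, rank, MPI_COMM_WORLD, &requests[1]);

    /* Complete both requests with one call instead of two MPI_Wait calls */
    MPI_Waitall(2, requests, MPI_STATUSES_IGNORE);

    printf("Rank %3d: first received element %d\n", rank, receive[0]);

    MPI_Finalize();
    return 0;
}
```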