I tried to run LULESH (https://github.com/LLNL/LULESH) in SST Macro and got the error below. In the configuration file I set debug = [mpi] to print the MPI debugging lines. I first collected traces of LULESH using dumpi and then simulated those traces on SST Macro 12.0.0. I used the latest sst-dumpi (https://github.com/sstsimulator/sst-dumpi) to collect the traces and tried both OpenMPI/4.1.1 and MPICH/3.3.2 for trace collection; with traces from either MPI implementation I got the same error.
Another thing to mention: for the miniVite proxy application (https://github.com/Exa-Graph/miniVite) we got lucky, and switching from OpenMPI/4.1.1 to MPICH/3.3.2 made the error go away. We would like to understand the reason behind this issue, since MPI_Request is an opaque handle and differences in the underlying MPI implementation should not change behavior.
Is there any workaround for this issue? We are encountering it with multiple applications. (We have also tried the pnnl-branch, but we hit the same error there.)
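To make the opaque-handle point concrete, here is a minimal, hypothetical C sketch (not taken from LULESH or miniVite) of the kind of nonblocking pattern dumpi records through the PMPI layer. The application only ever passes MPI_Request handles back to the library, which is why we would expect traces collected with OpenMPI and MPICH to replay the same way.

```c
/* Hypothetical example, not from LULESH or miniVite: a simple ring exchange
 * using nonblocking calls. The MPI_Request handles are opaque to the
 * application; whatever OpenMPI or MPICH stores behind them never appears
 * in application code, only the sequence of MPI calls does. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    int send_val = rank, recv_val = -1;
    MPI_Request reqs[2];   /* opaque handles: contents are implementation-defined */

    MPI_Irecv(&recv_val, 1, MPI_INT, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(&send_val, 1, MPI_INT, right, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    printf("rank %d received %d from rank %d\n", rank, recv_val, left);

    MPI_Finalize();
    return 0;
}
```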
I used the following dragonfly.ini configuration file to run the trace:
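The exact file is not reproduced here; the block below is only a minimal sketch of what such a dragonfly.ini might contain, assuming the namespaced sst-macro parameter format, the parsedumpi trace-replay app, and placeholder paths, rank counts, and topology settings. Parameter names can differ between sst-macro versions, so treat this as illustrative rather than the actual configuration from this report.

```ini
# Hypothetical sketch only -- not the original dragonfly.ini from this report.
# Keys follow the namespaced sst-macro format; names may differ by version.

debug = [mpi]                           # print MPI debugging lines, as described above

node {
  app1 {
    name = parsedumpi                   # replay a DUMPI trace instead of a skeleton app
    dumpi_metaname = traces/lulesh.meta # placeholder: .meta file written by libdumpi
    launch_cmd = aprun -n 27 -N 1       # placeholder: LULESH wants a cubic rank count
  }
}

topology {
  name = dragonfly                      # dragonfly interconnect; geometry/link keys omitted
}
```

With a file along these lines, the standalone simulator is typically launched as `sstmac -f dragonfly.ini`.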