Test MPI/CUDA polling in tasks instead of integrating with scheduler #1172
Labels
- category: senders/receivers
- P2300
- effort: 3 (a few days of work)
- effort: 4 (a few weeks of work)
- priority: low (nice to have, but nobody is going to be sad if this is never done)
- type: cleanup
- type: refactoring
We should test whether running the MPI/CUDA polling in standalone tasks works as well as integrating the polling into the scheduler. This would make the polling independent of the scheduler it runs on and could make integration with other libraries easier. It might also simplify shutdown: instead of separately keeping track of task counts, MPI request counts, and CUDA event counts, everything would fall under the task count. On the other hand, it might make the polling slower. This is also similar to what is proposed in ALPI.