Use collective writes for VTU files #294
Conversation
This seems good to test. Before it gets merged, we may want to check whether we can use sc_io_write_all throughout and no longer need to rely on the #ifdef P4EST_ENABLE_MPIIO.
Let's just merge this. This code has been well tested by @tim-griesbach.
In the implementation of parallel VTU file writing, ForestClaw used `MPI_File_write` for distributed data, e.g. coordinates. MPI also provides the collective function `MPI_File_write_all`, which is optimized for writing data that is distributed across processes. This PR uses `MPI_File_write_all` in the places where we write such distributed data.
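To illustrate the collective calling convention, here is a minimal, self-contained sketch, not taken from this PR: each rank writes its own block of doubles into a shared file with `MPI_File_write_at_all` (the explicit-offset variant of `MPI_File_write_all`). Since the call is collective, every rank of the communicator must reach it the same number of times.

```c
/* Minimal sketch (not ForestClaw code): every rank writes one disjoint
 * block of doubles into a shared file with a collective MPI-IO call. */
#include <mpi.h>
#include <stdlib.h>

int main (int argc, char **argv)
{
  int           rank, i;
  const int     ndoubles = 1000;        /* local block size (assumed) */
  double       *buf;
  MPI_File      fh;
  MPI_Offset    offset;

  MPI_Init (&argc, &argv);
  MPI_Comm_rank (MPI_COMM_WORLD, &rank);

  buf = (double *) malloc (ndoubles * sizeof (double));
  for (i = 0; i < ndoubles; ++i) {
    buf[i] = (double) rank;             /* dummy payload */
  }

  /* Each rank targets a disjoint region of the shared file. */
  offset = (MPI_Offset) rank * ndoubles * sizeof (double);

  MPI_File_open (MPI_COMM_WORLD, "example.bin",
                 MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
  /* Collective: all ranks must call this the same number of times. */
  MPI_File_write_at_all (fh, offset, buf, ndoubles, MPI_DOUBLE,
                         MPI_STATUS_IGNORE);
  MPI_File_close (&fh);

  free (buf);
  MPI_Finalize ();
  return 0;
}
```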
Moreover, we introduce a buffering mechanism for patches while writing VTU files. This is a side effect of ensuring that the collective function `MPI_File_write_all` is called equally many times on each rank. The buffering mechanism is controlled by the variable `patch_threshold` in `fclaw2d_vtk_state_t`. By default, `patch_threshold` is set to `-1`, i.e. we buffer all patch data for each `fclaw2d_vtk_write_field` call and then write the data to disk. Alternatively, one can set `patch_threshold` in the VTK state to a number strictly greater than `0`; then only `patch_threshold` patches are buffered, and whenever the threshold is exceeded the buffered data is written to disk. Previously there was one write operation per patch; this is no longer generally true, since the number of write operations now depends on `patch_threshold`. A sketch of the buffer-and-flush pattern is shown below.

We appreciate any feedback, in particular concerning the impact on parallel writing performance.
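For illustration only, here is a hypothetical sketch of the buffer-and-flush pattern described above. The `patch_buffer_t` type, its field names, and the `fwrite` flush are stand-ins, not the actual `fclaw2d_vtk_state_t` code; the real flush uses the collective `MPI_File_write_all` and must therefore also balance the number of flushes across ranks.

```c
/* Hypothetical sketch of buffering patch data and flushing it once
 * `patch_threshold` patches have accumulated (-1 buffers everything
 * and flushes only at the end).  Not the actual ForestClaw code. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct patch_buffer
{
  int       patch_threshold;    /* -1: buffer all patches */
  int       patches_buffered;   /* patches currently in the buffer */
  size_t    patch_bytes;        /* payload size of one patch */
  size_t    used;               /* bytes currently buffered */
  char     *data;               /* the buffer itself */
  FILE     *file;               /* stands in for the MPI file handle */
}
patch_buffer_t;

static void
buffer_flush (patch_buffer_t *pb)
{
  if (pb->used > 0) {
    /* The PR would issue a collective MPI_File_write_all here. */
    fwrite (pb->data, 1, pb->used, pb->file);
    pb->used = 0;
    pb->patches_buffered = 0;
  }
}

static void
buffer_add_patch (patch_buffer_t *pb, const void *payload)
{
  memcpy (pb->data + pb->used, payload, pb->patch_bytes);
  pb->used += pb->patch_bytes;
  ++pb->patches_buffered;

  /* Flush once the threshold is reached; -1 disables early flushes. */
  if (pb->patch_threshold > 0 &&
      pb->patches_buffered >= pb->patch_threshold) {
    buffer_flush (pb);
  }
}

int main (void)
{
  const int       num_patches = 10;
  double          payload[4] = { 0., 1., 2., 3. };   /* dummy patch data */
  int             i;
  patch_buffer_t  pb;

  pb.patch_threshold = 3;                /* flush every 3 patches */
  pb.patches_buffered = 0;
  pb.patch_bytes = sizeof (payload);
  pb.used = 0;
  /* Size the buffer for `patch_threshold` patches (all patches if -1). */
  pb.data = (char *) malloc ((pb.patch_threshold > 0 ?
                              pb.patch_threshold : num_patches) *
                             pb.patch_bytes);
  pb.file = fopen ("patches.bin", "wb");

  for (i = 0; i < num_patches; ++i) {
    payload[0] = (double) i;
    buffer_add_patch (&pb, payload);
  }
  buffer_flush (&pb);                    /* write the remaining patches */

  fclose (pb.file);
  free (pb.data);
  return 0;
}
```

The trade-off is extra buffer memory in exchange for fewer, larger write calls, which is typically the access pattern that collective MPI-IO handles best.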