Add mapping tester repetitions #195
base: develop
Conversation
I am wondering if I should write a documentation page in …
What I like about this approach:
- the repetitions are part of the JSON file (see the sketch after this list)

What I don't like about this approach:
- the hierarchy of the generated run tree becomes yet another level deeper
- each run (with the exact same configuration) is duplicated in another directory. This is particularly problematic in terms of memory: in the current setup, larger runs can easily lead to $\mathcal{O}(10)\,\mathrm{GB}$, so having 5 repetitions would lead to excessive memory usage for no reason.
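For illustration, such an entry could look roughly like this; the `repetitions` key and its placement are assumptions based on this discussion, not necessarily the exact format introduced by this PR:

```json
{
  "general": {
    "repetitions": 5
  }
}
```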
Now there is another suggestion of making the generation of output files (which occupy this memory) optional. However, the use case of the mapping-tester is to test mappings, which have error and runtime as their integral properties. One of the main features of all this is in fact that it measures both in one execution. Therefore, disabling the error measurement seems rather like a different use case.
In my previous setups, I simply repeated the whole pipeline "repetitions" times to generate the same statistical data for the individual runs. This gets rid of the two downsides I mentioned above, but comes with the disadvantage that the temporarily generated profiling data of individual runs is no longer accessible after all repetitions are completed.
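For context, a minimal sketch of that previous workaround; the three helpers are placeholders for the respective mapping-tester steps, not actual ASTE functions:

```python
# Hedged sketch: repeat the full generate/run/post-process pipeline and
# keep one statistics file per repetition. generate_cases(), run_cases()
# and postprocess() are placeholders, not real ASTE commands.
def generate_cases(): ...
def run_cases(): ...
def postprocess(output): ...

repetitions = 5

for repetition in range(repetitions):
    generate_cases()
    run_cases()
    # per-repetition statistics survive, but the temporary profiling data
    # of the individual runs is overwritten by the next repetition
    postprocess(output=f"statistics-{repetition}.csv")
```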
I think a clean approach here would be to keep the same directory hierarchy as we have right now, and store the profiling information of each run in individual `precice-profiling-<N>` directories. Then, we would have (the same) memory problems as before, but we would avoid duplicating the same runs and directory structures while keeping the profiling information persistent.
True. In practice this doesn't matter though. The directory structure is consistent, and it is easy to use globbing expressions or similar tools. Currently, we extract some job information from the path, but fundamentally, this could also be stored elsewhere.
I assume that you mean storage, which depends on the goal of using the tool:
The mapping-tester was built to test accuracy and runtime for the mappings section of the preCICE v2 paper. Back then, I thankfully didn't run into storage issues, but the process was nevertheless wasteful in terms of storage and runtime (pointless post-processing).
Reusing the same case files for multiple runs comes with the severe downside that post-processing needs to be interleaved with the runs, which requires the entire job to wait. This is wasteful in terms of project CPUh and needlessly blocks the partition, especially since post-processing isn't even parallelized. This feature has always been part of the purpose of this tool; up to now, we only got away with not having it built-in.
You successfully summarized the issue: both are a concern.
That's not the case; please read again. If I have a profiling directory for each repetition, then I can handle everything afterwards.
I am trying to make the point that the two are not necessarily a concern at the same time. If I am only interested in measuring runtime, then I shouldn't have to pay for measuring mapping accuracy.
I get it now. You essentially move the "repetitions" loop into the run script of a case itself, roughly like this:

```python
import shutil

for run in range(repetitions):
    runCase()  # run the case once
    shutil.move("precice-profiling", f"precice-profiling-{run}")
```

So there is no interleaving of runs and post-processing 👍. There are still some open points:
Can you point to a branch which uses your current workflow? It would be interesting to see an implementation of this.
My first idea was to make use of the
Yes, it is unnecessary IO, but no additional storage usage: the output mesh files generated in the first run will be overwritten later on. Having the IO multiple times shouldn't hurt the runtime though.
each measurement should give the same result, right?
I see what you have in mind. What I did last time was to generate separate statistics files for each run and to perform the averaging over all generated statistics files later on. Another option would be to load all frames immediately in each case and then aggregate (e.g.) the mean timing. Either way, one would look at a single timing or at measurements derived from the timings; it's mostly a question of where to aggregate the data. That's what I did last time, after storing each run as a separate statistics file: https://github.com/davidscn/aste/blob/mapping-paper-setups/tools/mapping-tester/aggregate-timings.py
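To make the "aggregate afterwards" idea concrete, here is a minimal sketch (not the linked aggregate-timings.py), assuming hypothetical per-repetition CSV files named `stats-<N>.csv` with a numeric `time` column:

```python
# Load all per-repetition statistics files and aggregate the timings.
# File pattern and column name are assumptions; adapt them to the real
# output format of the mapping-tester.
import glob

import pandas as pd

frames = [pd.read_csv(path) for path in sorted(glob.glob("stats-*.csv"))]
combined = pd.concat(frames, ignore_index=True)
print("mean:", combined["time"].mean(), "std:", combined["time"].std())
```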
Main changes of this PR
This PR:
I have a further change coming up in which I make writing the output mesh on B optional. This prevents excessive use of storage. #197
Based on #210
Author's checklist
pre-commit
hook and usedpre-commit run --all
to apply all available hooks.docs/README.md
.precice/tutorials/aste-turbine
.