Merging TuringBenchmarking into DynamicPPL #715

Comments
I had an unsubstantiated opinion on Slack that this should be a separate repo. The main reason is to avoid the DPPL repo itself becoming a home for all sorts of things – while we might not have much 'utility' code right now, this can grow in the future, and it's harder to remove code than to add it. I can see the point about maintainability being harder – for example, having to keep the repo up to date with DPPL version releases. One way to keep us on track with this is the docs repo: if we have a doc page explaining these debugging/profiling tools (which we really should have!), and DPPL is bumped in the docs environment, then it will force us to keep the profiling repo up to date as well. (The docs repo is actually doing a good job of forcing us to iron out stuff like Bijectors right now!)
Also agree with @penelopeysm here 👍
We don't generally consider the amount of code when deciding whether to create a new repo or package. As long as the code is organised and readable, it is okay. In reality, IIRC, the TuringBenchmarking repo was created from some small utilities that @torfjelde used in personal workflows. Then, it was rushed into a package for a winter school so that people could use it following Julia's Pkg workflow. The code in …

EDIT: My primary metric when deciding on new packages is isolating complexity and encouraging reusability or stability. For example, developing and debugging MCMC samplers is often very challenging, so we decided to isolate them from the extra complexity of DynamicPPL.
This is not strictly true. It was somewhat rushed into having its first release, but it was a module from the get-go, and made so because I thought the initial thing I had been doing, i.e. putting benchmarking in …
I guess the question is exactly how this is to be done? Do you mean to make a new submodule of DPPL that contains benchmarking tools, e.g. …? My original motivation for making the benchmarking a separate package (and also why I'm still of the opinion that this is the way to go) is: …
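If the submodule route were taken, it could be a minimal sketch along these lines – note that the module name `Benchmarking`, the helper `evaluation_benchmark`, and its placement are all assumptions for illustration, not an existing DynamicPPL API:

```julia
# Hypothetical sketch: a src/benchmarking.jl included from DynamicPPL.jl.
# Module name and helper function are illustrative assumptions.
module Benchmarking

using BenchmarkTools: @benchmarkable
using ..DynamicPPL: evaluate!!

"Return a benchmark that times one evaluation of `model` with `varinfo`."
function evaluation_benchmark(model, varinfo)
    # Interpolate arguments so setup cost is excluded from the timing.
    return @benchmarkable $evaluate!!($model, deepcopy($varinfo))
end

end # module
```

The upside of this layout is that the tools version in lockstep with DynamicPPL itself; the downside is that BenchmarkTools becomes a hard dependency unless it is also moved behind a weak dep.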
DynamicPPL's tools for profiling and debugging models are growing.
TuringBenchmarking is very lightweight, and its scope overlaps with the growing utilities inside DynamicPPL, so it makes sense to consider merging. Besides, this benchmarking code will be better maintained under DynamicPPL.
The extra deps like ReverseDiff, Zygote, and PrettyTables can be removed or managed via weak deps.
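For reference, Julia's package extensions are the standard mechanism for the weak-deps route; a sketch of what could be added to DynamicPPL's Project.toml (the extension name is illustrative, and the UUID should be verified against ReverseDiff's own Project.toml):

```toml
# Hypothetical additions to DynamicPPL's Project.toml.
[weakdeps]
ReverseDiff = "37e2e3b7-166d-5795-8a7a-e32c996b4267"

[extensions]
# Loaded automatically only when the user also loads ReverseDiff,
# so it adds no dependency cost for everyone else.
DynamicPPLReverseDiffExt = "ReverseDiff"
```

The extension's code then lives in `ext/DynamicPPLReverseDiffExt.jl`, keeping ReverseDiff out of the hard dependency list while still shipping the AD-specific benchmarking glue.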