Add save_model and load_model functions (or something similar) #259
@aloctavodia pointed out that we can already save fitted models via arviz.to_netcdf and load them back with arviz.from_netcdf.
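For reference, a minimal sketch of that workflow; the data, formula, and file name here are just placeholders, not from this thread:

```python
import arviz as az
import bambi as bmb
import pandas as pd

# Placeholder data and model.
data = pd.DataFrame({"y": [1.2, 0.7, 3.1, 2.4], "x": [1.0, 2.0, 3.0, 4.0]})
model = bmb.Model("y ~ x", data)
idata = model.fit()

# Save the fitted results (an arviz.InferenceData object) to disk ...
az.to_netcdf(idata, "fitted_results.nc")

# ... and load them back in a later session. Note this restores only the
# InferenceData, not the Bambi Model object itself.
idata = az.from_netcdf("fitted_results.nc")
```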
I would like to add this feature soon, and I've been thinking that dill is a good candidate. However, a model has two "independent" objects associated with it: the Model instance itself and the InferenceData object that Model.fit() returns. I think this functionality would make more sense if it could be used like save_model(object, path) and load_model(path).
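Just to make the idea concrete, here is a rough sketch of what a dill-based pair of helpers could look like; save_model and load_model are hypothetical names from the proposal, not existing Bambi functions, and bundling both objects in one file is only one possible design:

```python
import dill  # https://github.com/uqfoundation/dill

def save_model(model, idata, path):
    # Hypothetical helper: serialize the Bambi Model and its InferenceData together.
    with open(path, "wb") as f:
        dill.dump({"model": model, "idata": idata}, f)

def load_model(path):
    # Hypothetical helper: restore both objects from a single file.
    with open(path, "rb") as f:
        bundle = dill.load(f)
    return bundle["model"], bundle["idata"]
```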
Something that actually might make more sense for Bambi is a JSON or YAML format that records the model priors and the formula string, and then reconstructs the model from them. This would be diffable and could be saved to the netCDF file in the form of a string. I haven't thought this through much, but I'm just posting the idea here.
On Thu, Apr 8, 2021 at 6:55 PM Tomás Capretto wrote:

> I would like to add this feature soon, and I've been thinking that dill
> (https://github.com/uqfoundation/dill) is a good candidate. However, a model
> has two "independent" objects associated with it: the Model instance itself
> and the InferenceData object that Model.fit() returns. I think this
> functionality would make more sense if it could be used like
> save_model(object, path) and load_model(path). I'm opening a new issue with
> some ideas about how we could achieve this.
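A minimal sketch of what such a description could look like; the exact fields and the idea of storing it as an attribute of the saved results are assumptions, not an agreed design:

```python
import json

# Hypothetical, human-readable description of a model: formula plus prior settings.
model_config = {
    "formula": "y ~ x",
    "family": "gaussian",
    "priors": {"x": {"name": "Normal", "mu": 0.0, "sigma": 1.0}},
}

# Serialized as a string, this is diffable and small enough to travel
# inside the netCDF file, e.g. stored as an attribute alongside the draws.
config_str = json.dumps(model_config, indent=2)

# Loading would parse the string and rebuild the Bambi model from it.
restored_config = json.loads(config_str)
```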
Makes much sense! Much better than my proposal! I'll try to write something.
I'm realizing that we have more than a formula and the prior description. We also have a pandas DataFrame, the Bambi Term instances, and a formulae.DesignMatrices object. The design matrices and the terms could be re-constructed from the formula, the prior description, and the pandas DataFrame, but maybe problems can arise? For example, what if you have a model built and saved with one version of Bambi, and then you load the description with another version of Bambi where something has changed? I don't know, just thinking out loud here.
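One way to soften the version problem, sketched under the assumption that the serialized description also records the Bambi version it was created with (bmb.__version__ is assumed to be available):

```python
import json
import bambi as bmb

# Record the Bambi version alongside the model description (illustrative only).
config = {"formula": "y ~ x", "bambi_version": bmb.__version__}
serialized = json.dumps(config)

# When loading, warn (or refuse to load) if the versions do not match.
loaded = json.loads(serialized)
if loaded["bambi_version"] != bmb.__version__:
    print(
        f"Model description was saved with Bambi {loaded['bambi_version']} "
        f"but Bambi {bmb.__version__} is installed; the rebuilt model may differ."
    )
```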
Hi, I don't have much to contribute to this topic, but I do have a question. Is there a (recommended) way to save and load models after fitting them? Let's say you want to run the model, save it, and then later come back to play with visualisations or inspections without having to rerun it.
Things have sort of changed for the better here -- I think you could save and reload the inference results using .to_netcdf on the inference data object and arviz.from_netcdf, respectively. A nice thing about xarray/netCDF/arviz (they're all kind of the same thing when it comes to …
We're not offering anything at the moment, unfortunately. But I think I can help you with some ideas. When we talk about "saving and loading the model" we need to keep in mind there are two things we usually work with:

- The Bambi Model instance itself.
- The InferenceData object that Model.fit() returns.

These two objects differ in some respects when it comes to saving and loading.
You could write a small script that creates the Bambi model and then checks whether a specific file with saved results already exists: if it does, load the InferenceData from there; if not, fit the model and save the result. Is that the ideal approach? I don't think so. If you need things that require interacting with the underlying PyMC model after getting posterior draws, the PyMC model will be recompiled each time. However, that is usually much cheaper than getting draws from the posterior over and over again.
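A sketch of that fit-or-load pattern; the file name, data, and formula are placeholders:

```python
import os

import arviz as az
import bambi as bmb
import pandas as pd

RESULTS_PATH = "fitted_results.nc"  # placeholder file name

# Rebuilding the Bambi model is cheap compared to sampling.
data = pd.DataFrame({"y": [1.2, 0.7, 3.1, 2.4], "x": [1.0, 2.0, 3.0, 4.0]})
model = bmb.Model("y ~ x", data)

if os.path.exists(RESULTS_PATH):
    # Reuse previously saved posterior draws instead of sampling again.
    idata = az.from_netcdf(RESULTS_PATH)
else:
    idata = model.fit()
    az.to_netcdf(idata, RESULTS_PATH)
```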
PyMC3 allows saving traces via pm.save_trace(), and they can be loaded via pm.load_trace(). I think that having functionality to save and load model objects would favor interactivity when working with Bambi models. As it stands, every time I reset my session I need to run the samplers again, which is annoying.