Batch learning #1193
-
Hi all,
Points 2.5) and 3) below are causing me issues with this package.
2.5) How can I save and load the weights of the density estimation network?
3) When training on the GPU, how can I keep the data from filling up memory and remove data I no longer need?
Help is very much appreciated,
-
Hi there, thanks for reaching out!
2.5.: See here for how to save and load posteriors.
3.: If you are on GPU, then do append_simulations(..., data_device="cpu"), see here. Removing data is not easily supported, but you could remove the data from the ._theta_roundwise, ._x_roundwise, and ._prior_masks_roundwise attributes, see here. Writing your own training loop is possible only if you are working on the GitHub version of sbi; this is not released on PyPI yet. Here is a tutorial on how to do that.
Michael
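For concreteness, a minimal sketch of both answers, assuming `posterior`, `inference`, `theta`, and `x` come from a standard sbi workflow (the file name is a placeholder); the sbi FAQ recommends plain pickle for saving posteriors:

```python
import pickle

# 2.5) Persist and restore a trained posterior (per the sbi FAQ).
with open("posterior.pkl", "wb") as handle:
    pickle.dump(posterior, handle)

with open("posterior.pkl", "rb") as handle:
    posterior = pickle.load(handle)

# 3) Keep appended training data on the CPU even if the network
#    itself trains on the GPU.
inference.append_simulations(theta, x, data_device="cpu")
```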
-
Just another quick question: is there a way to save the weights of the density estimator, or, more precisely, can I concatenate the embedding_net after I build the posterior_nn? Due to the complexity of my problem, I want to pretrain both the embedding_net and the density estimator separately and then train them together. Is there a way of doing so?
-
There is no easy way to do this; you will have to hack it in somehow.
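For anyone who lands here wanting to try the hack: one possible route, sketched below under the assumption that `posterior_nn` uses the `embedding_net` instance you pass in as-is, so pretrained weights it already carries are kept and then fine-tuned jointly by `train()`. The architecture, `prior`, `theta`, and `x` are placeholders.

```python
import torch
from torch import nn
from sbi.inference import SNPE
from sbi.utils import posterior_nn

# Placeholder embedding net; substitute your own architecture.
embedding = nn.Sequential(
    nn.Linear(100, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# ... pretrain `embedding` with a plain PyTorch loop here, or load weights:
# embedding.load_state_dict(torch.load("embedding_pretrained.pt"))

# The builder keeps the passed instance, so its weights survive; train()
# then updates the embedding and the flow together.
density_estimator_fn = posterior_nn(model="maf", embedding_net=embedding)
inference = SNPE(prior=prior, density_estimator=density_estimator_fn)
density_estimator = inference.append_simulations(theta, x).train()
```

Restoring pretrained weights into the flow itself is the hard part, since the flow is only built lazily on the first call to `train()`; that is where the hacking comes in.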
-
Ok, I got it to work now by deleting the data out of the inference class. I also tried writing my own training loop, but I ran into serious problems, and due to the lack of documentation for this version I soon felt lost (I was using an embedding net and couldn't get the dimensions of the conditions to work). If I may give a recommendation for future versions: let users pass their own dataloader for training.
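For later readers, a rough sketch of the deletion described above, using the private attribute names from the reply earlier in the thread (internal API, so this may differ between sbi versions):

```python
# Keep only the most recently appended round of training data.
# These attributes are private, so verify them against your sbi version.
inference._theta_roundwise = inference._theta_roundwise[-1:]
inference._x_roundwise = inference._x_roundwise[-1:]
inference._prior_masks_roundwise = inference._prior_masks_roundwise[-1:]
# If your version also keeps per-round bookkeeping (e.g. a round index),
# trim it the same way so the lists stay aligned.
```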
-
Thanks for the suggestion with the dataloader, that's good to know. Can you please create an issue / feature request for this?
-
Hello, have the features discussed here already been implemented in sbi? I am using pairs of ground-truth parameters and time-series data to train a density estimator in an amortized way (NPE). For some complex and high-dimensional models, I need a very large number of datasets to achieve proper training. Is it possible to resume the training when the previous round has converged and, at the same time, remove the old datasets and introduce a new batch, without losing or restarting the network parameters?

If this is possible, I have a follow-up clarification question: what happens to the learning rate between training rounds? Is there a way to control it so that it is not reset each time a new dataset is introduced? Thank you so much for your support!
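Not an official answer, but a sketch of such a loop built from the pieces above. It assumes a recent sbi in which `train(resume_training=True)` reuses the existing optimizer rather than creating a fresh one, so the optimizer state, and with it the learning rate, is not reset when new data arrives; `simulate_batch()` is a hypothetical helper returning new `(theta, x)` pairs:

```python
num_batches = 10  # placeholder

for round_idx in range(num_batches):
    theta, x = simulate_batch()  # hypothetical: a fresh batch of pairs

    # Optionally drop older batches first (see the private-attribute
    # sketch above), then append the new one on the CPU.
    inference.append_simulations(theta, x, data_device="cpu")

    # After the first pass, resume_training=True keeps both the network
    # parameters and the optimizer state across calls.
    density_estimator = inference.train(resume_training=(round_idx > 0))

posterior = inference.build_posterior(density_estimator)
```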