Moving the comment by @ningyuxin1999 under #1186 here for discussion:
Here are the errors I got from it:
I tried with cpu device as well, but got:
I'm not sure if I understood it correctly, could you maybe help? I really appreciate that.

Originally posted by @ningyuxin1999 in #1186 (comment)
The PR under which you commented has been merged. Essentially, you have to pass the device only once, i.e., when you create the inference class. Importantly, your custom embedding net should not be on the CUDA device yet; it will be moved internally:

```python
net = YourCustomEmbeddingNet(...)  # lives on the CPU; output layer returns 10 units
neural_posterior = posterior_nn(model="maf", embedding_net=net, hidden_features=10, num_transforms=2)
inference = SNPE(prior=prior, device="cuda", density_estimator=neural_posterior)
```

The same applies to the data: pass it on the CPU, and it will be moved to the device internally.

The remaining code you posted will probably not work; I think there is a misunderstanding. You do not need to write an explicit training loop. All you need to do is append the data and train:

```python
inference.append_simulations(your_sampled_theta, your_simulated_data)
inference.train()
posterior = inference.build_posterior()
```

(If you really have to use your custom dataloader, e.g. because your data does not fit into RAM at once, then things become a bit tricky, but there has been a similar issue, see #1193.)
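To make this concrete, here is a minimal end-to-end sketch under some assumptions: a toy 3-dimensional simulator and a small fully connected embedding net stand in for your model, and the imports follow recent sbi versions (in older versions, `posterior_nn` lives in `sbi.utils` instead of `sbi.neural_nets`):

```python
import torch
from torch import nn

from sbi.inference import SNPE
from sbi.neural_nets import posterior_nn
from sbi.utils import BoxUniform

# Toy stand-ins for your problem: 3-dim parameters, 3-dim data.
prior = BoxUniform(low=-2 * torch.ones(3), high=2 * torch.ones(3))

def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)

# The embedding net is built on the CPU; sbi moves it to the device internally.
embedding_net = nn.Sequential(
    nn.Linear(3, 32),
    nn.ReLU(),
    nn.Linear(32, 10),  # output layer returns 10 units
)

neural_posterior = posterior_nn(
    model="maf", embedding_net=embedding_net, hidden_features=10, num_transforms=2
)

# The device is passed exactly once, when creating the inference object.
# Use device="cpu" if no GPU is available.
inference = SNPE(prior=prior, density_estimator=neural_posterior, device="cuda")

# Simulate on the CPU; the data is moved to the device internally as well.
theta = prior.sample((1000,))
x = simulator(theta)

# No explicit training loop is needed.
inference.append_simulations(theta, x)
inference.train()
posterior = inference.build_posterior()
```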
Given your code snippet, I don't see why this would not work. Maybe your data or your embedding net is still on the GPU?
You can check the device of the net like this (a standard PyTorch idiom, with `net` being your embedding net):
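```python
# nn.Module has no .device attribute; inspect a parameter instead.
print(next(net.parameters()).device)  # should print "cpu" before training

# The same check for your data tensors:
print(your_sampled_theta.device, your_simulated_data.device)
```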