Best way of using MAP/ML point when fitting #437
Comments
So if we define a ball/ellipsoid/cube in the posterior space around the ML point, how do we set the boundaries exactly? With small boundaries we could get railing in the posteriors, right? In that case we would either need to repeat the exercise for various boundaries, or build an effective Fisher matrix covering some given volume of the posterior.
My thinking was that we could define an ellipsoid around the MAP value and then sample from a prior that has 99% of its volume inside the ellipsoid and 1% outside. This way the majority of the sampling is focused on the ellipsoid, but if there is substantial posterior volume outside, it will likely still be captured. This is a vague idea, though, and I am not sure it's implementable. Specifically, if x is a parameter within the unit cube, the prior would just be a 99/1 mixture of a uniform distribution inside the ellipsoid and a uniform distribution outside it. The problem is I'm not sure there is a parameter transformation implementing this kind of prior.
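A parameter transformation of this kind does seem possible, at least in low dimensions. Here is a minimal 2-D sketch of the mixture idea above, written as a dynesty-style prior transform from the unit cube. The MAP location, the semi-axes, and the 99/1 split are all hypothetical placeholders, not values from this thread:

```python
import numpy as np

# Hypothetical 2-D setup: an axis-aligned ellipse around an assumed
# MAP point, in unit-cube coordinates.
map_point = np.array([0.5, 0.5])  # assumed MAP location
a, b = 0.1, 0.05                  # assumed semi-axes of the ellipse

def prior_transform(u):
    """Map unit-cube coordinates u to a 99/1 mixture prior:
    with probability 0.99, uniform inside the ellipse;
    with probability 0.01, uniform over the unit cube."""
    if u[0] < 0.99:
        # Rescale u[0] to [0, 1) and draw uniformly in the ellipse
        # via polar coordinates (radius ~ sqrt for uniform area).
        r = np.sqrt(u[0] / 0.99)
        theta = 2.0 * np.pi * u[1]
        return map_point + np.array([a * r * np.cos(theta),
                                     b * r * np.sin(theta)])
    else:
        # Remaining 1% of unit-cube volume: uniform over the cube
        # (a stand-in for "outside the ellipsoid").
        v0 = (u[0] - 0.99) / 0.01
        return np.array([v0, u[1]])
```

Note this routes one unit-cube coordinate through the branch decision, so the "outside" component reuses it after rescaling; in higher dimensions a uniform draw inside the ellipsoid would need a different construction (e.g. Gaussian direction plus a radius coordinate), and the density is discontinuous at the ellipsoid boundary.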
One way of using the MAP approximation is to fit an MVN to it (the Laplace approximation) and use that as a proposal distribution, to be incorporated into the prior, much like the expressions in your previous reply. Here is a nice short paper exploring this idea: https://arxiv.org/pdf/2212.01760.pdf.
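A minimal sketch of the Laplace approximation mentioned above: fit an MVN at the MAP point using the inverse Hessian of the negative log-posterior. The toy Gaussian posterior and the finite-difference Hessian are assumptions for illustration, not anything from the linked paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def neg_log_post(theta):
    """Toy negative log-posterior: a standard 2-D Gaussian,
    standing in for a real model."""
    return 0.5 * np.sum(theta ** 2)

# Find the MAP point numerically.
res = minimize(neg_log_post, x0=np.array([1.0, -1.0]))
map_point = res.x

def num_hessian(f, x, eps=1e-4):
    """Central-difference Hessian of scalar f at x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = eps
            ej = np.zeros(n); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps ** 2)
    return H

# Laplace approximation: MVN centered at the MAP point with
# covariance = inverse Hessian of the negative log-posterior.
cov = np.linalg.inv(num_hessian(neg_log_post, map_point))
laplace = multivariate_normal(mean=map_point, cov=cov)
```

The resulting `laplace` distribution could then serve as the proposal to fold into the prior, along the lines the comment describes.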
@mvsoom Thanks for posting this nice paper here. I will look into it.
This is an open-ended question.
Often there are situations where a maximum-likelihood or MAP point is already known, but one is still interested in sampling the posterior around it. With something like emcee this is trivial: you just initialize the walkers in a ball around that point and start sampling. The question is whether there is a way of doing something similar with dynesty that doesn't necessarily involve running a full nested sampling. Obviously such runs won't be very useful for evidence calculations.
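For reference, the emcee-style initialization described above can be sketched as follows; the ML point, dimensions, and ball scale are hypothetical placeholders:

```python
import numpy as np

ndim, nwalkers = 3, 32
ml_point = np.array([1.0, 2.0, 0.5])  # hypothetical known ML/MAP estimate

# Start all walkers in a small Gaussian ball around the ML point.
rng = np.random.default_rng(0)
p0 = ml_point + 1e-4 * rng.standard_normal((nwalkers, ndim))

# This ball would then seed the sampler, e.g.:
# sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
# sampler.run_mcmc(p0, nsteps)
```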
Possible ideas.