
Best way of using MAP/ML point when fitting #437

Open
segasai opened this issue Apr 14, 2023 · 4 comments
Labels: help wanted, question

Comments

@segasai
Collaborator

segasai commented Apr 14, 2023

This is an open-ended question.

Often there are situations where a maximum-likelihood (ML) or MAP point is already known, but one is still interested in sampling the posterior around it. With samplers like emcee this is trivial: you just initialize the walkers in a small ball around that point and start sampling from there. The question is whether there is a way of doing something similar with dynesty that doesn't necessarily involve running a full nested sampling run. Obviously such runs won't be very useful for evidence calculations.
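For concreteness, the emcee version of this trick looks roughly like the following; `log_prob`, `map_point`, and the ball scale are placeholders, not part of any specific problem:

```python
import numpy as np
import emcee

ndim, nwalkers = 5, 32            # problem-specific
map_point = np.zeros(ndim)        # known MAP/ML location (placeholder)

def log_prob(theta):
    # placeholder log-posterior; replace with the real one
    return -0.5 * np.sum(theta ** 2)

# start all walkers in a small Gaussian ball around the MAP point
p0 = map_point + 1e-4 * np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 5000, progress=True)
```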

Possible ideas:

  • Using a small number of live points and initializing one of the live points at the ML location (see the sketch after this list).
  • Define a ball/ellipsoid/cube in the posterior space around the ML point and sample starting from a uniform distribution inside that ball.
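A minimal sketch of the first idea, assuming a dynesty version whose `NestedSampler` accepts the `live_points` argument (check the documentation for your version; the expected tuple layout may differ) and using a toy flat prior and Gaussian likelihood as placeholders:

```python
import numpy as np
from dynesty import NestedSampler

ndim, nlive = 5, 100              # deliberately small number of live points
ml_point = np.full(ndim, 0.5)     # known ML location, assumed to lie in the unit cube (placeholder)

def loglike(x):
    # placeholder likelihood; replace with the real one
    return -0.5 * np.sum(((x - ml_point) / 0.01) ** 2)

def prior_transform(u):
    return u                      # flat prior on the unit cube

# draw live points from the prior, then overwrite one with the ML location
live_u = np.random.uniform(size=(nlive, ndim))
live_u[0] = ml_point              # valid here only because prior_transform is the identity
live_v = np.array([prior_transform(u) for u in live_u])
live_logl = np.array([loglike(v) for v in live_v])

sampler = NestedSampler(loglike, prior_transform, ndim, nlive=nlive,
                        live_points=[live_u, live_v, live_logl])
sampler.run_nested()
results = sampler.results
```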
segasai added the question and help wanted labels on Apr 14, 2023
@lalit-pathak

So if we define a ball/ellipsoid/cube in the posterior space around the ML point, how exactly do we set the boundaries? If the boundaries are too small, we could get railing against them in the posteriors, right? In that case we would either need to repeat the exercise for various boundary choices or construct an effective Fisher matrix covering some given volume of the posterior.
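One way to get such an effective Fisher matrix without hand-tuning boundaries is a Laplace-style estimate at the ML point, e.g. via the approximate inverse Hessian that scipy's BFGS minimizer returns; a rough sketch (the objective and starting point are placeholders):

```python
import numpy as np
from scipy.optimize import minimize

def negloglike(x):
    # placeholder negative log-likelihood; replace with the real one
    return 0.5 * np.sum(x ** 2)

x0 = np.zeros(5)                  # rough starting guess
res = minimize(negloglike, x0, method="BFGS")

map_point = res.x                 # refined ML/MAP location
cov = res.hess_inv                # approximate inverse Hessian ~ Fisher covariance

# use the covariance to set the box/ellipsoid size, e.g. a 5-sigma region per parameter
scales = 5.0 * np.sqrt(np.diag(cov))
lo, hi = map_point - scales, map_point + scales
```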

@segasai
Collaborator Author

segasai commented Jun 14, 2023

My thinking was that we could define an ellipsoid around the MAP value and then sample from a prior that has 99% of its volume inside the ellipsoid and 1% outside. This way the majority of the sampling will be focused on the ellipsoid, but if there is substantial posterior volume outside, it will likely still be captured. This is a vague idea, though, and I am not sure it's implementable.

Specifically, if $x$ is a parameter within the unit cube, the posterior is just $\frac{1}{Z} L(x)$. If we now adopt a prior $\pi(x)$ satisfying the volume requirement above, we would instead sample a posterior of the form $\pi(x) \cdot \frac{1}{Z} \frac{L(x)}{\pi(x)}$. This is technically the same posterior as before, but the sampling will mostly avoid low-$L$ regions.

The problem is that I'm not sure there is a parameter transformation implementing this kind of prior.
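For what it's worth, in the simpler case of an axis-aligned box (rather than a general ellipsoid) around the MAP point, with the original prior uniform on the unit cube, the per-dimension mixture of uniforms has a piecewise-linear CDF that can be inverted in closed form, so a prior transform does exist at least in that restricted setting. A rough sketch, where `x_map`, `half`, `w`, and the toy likelihood are all placeholders:

```python
import numpy as np
from dynesty import NestedSampler

ndim = 5
x_map = np.full(ndim, 0.5)        # MAP location in unit-cube coordinates (placeholder)
half = 0.05                       # half-width of the box around it (placeholder)
w = 0.99                          # fraction of prior mass placed inside the box

a = np.clip(x_map - half, 0.0, 1.0)
b = np.clip(x_map + half, 0.0, 1.0)
width = b - a

def loglike(x):
    # placeholder likelihood; replace with the real one
    return -0.5 * np.sum(((x - x_map) / 0.01) ** 2)

def log_pi(x):
    # log-density of the mixture prior: w * U(a, b) + (1 - w) * U(0, 1), per dimension
    inside = (x >= a) & (x <= b)
    dens = (1.0 - w) + inside * (w / width)
    return np.sum(np.log(dens))

def prior_transform(u):
    # inverse of the piecewise-linear mixture CDF, applied per dimension
    lo_edge = (1.0 - w) * a               # CDF value at x = a
    hi_edge = (1.0 - w) * b + w           # CDF value at x = b
    return np.where(u < lo_edge, u / (1.0 - w),
           np.where(u > hi_edge, (u - w) / (1.0 - w),
                    (u + w * a / width) / ((1.0 - w) + w / width)))

def effective_loglike(x):
    # dividing the auxiliary prior back out leaves the original posterior unchanged
    return loglike(x) - log_pi(x)

sampler = NestedSampler(effective_loglike, prior_transform, ndim, nlive=200)
sampler.run_nested()
```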

@mvsoom

mvsoom commented Jan 8, 2024

One way of using the MAP approximation is to fit an MVN (multivariate normal) to it (a Laplace approximation) and use that as a proposal distribution, to be incorporated into the prior, much like the expressions in your previous reply. Here is a fine short paper exploring this idea: https://arxiv.org/pdf/2212.01760.pdf.
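A rough sketch of what that could look like in dynesty terms, assuming the original prior is flat and much broader than the Laplace MVN, so dividing the MVN density back out of the likelihood leaves the target posterior essentially unchanged (`map_point`, `cov`, and the toy likelihood are placeholders):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal
from dynesty import NestedSampler

ndim = 5
map_point = np.zeros(ndim)        # MAP location (placeholder)
cov = 0.01 * np.eye(ndim)         # Laplace covariance, e.g. an inverse Hessian at the MAP (placeholder)
chol = np.linalg.cholesky(cov)
laplace = multivariate_normal(mean=map_point, cov=cov)

def loglike(x):
    # placeholder likelihood; replace with the real one
    return -0.5 * np.sum((x - map_point) ** 2 / 0.02)

def prior_transform(u):
    # map the unit cube onto the Laplace MVN used as the sampling prior
    return map_point + chol @ norm.ppf(u)

def effective_loglike(x):
    # divide the MVN density back out so the sampled posterior matches the original one
    return loglike(x) - laplace.logpdf(x)

sampler = NestedSampler(effective_loglike, prior_transform, ndim, nlive=200)
sampler.run_nested()
```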

@lalit-pathak

@mvsoom Thanks for posting this nice paper here. I will look into it.
