Unreasonably large uncertainties: Constraining normalizing parameters for Multinest #175
To add to this, one should be able to use a more informed prior than the DE results if available.
Thanks for the comment. I started to explain to Tellef how I would approach it and asked him to raise/document the issue on GitHub for easier access. The current layout of the Normalizer classes makes it a bit more complicated than necessary, as one has to redefine the whole […]. Changing inner functions is rather obscure, so either one has to copy/paste the whole […]. @vetlewi: Do you have a child class where you inherited from one of the Normalizers? If making changes, I don't want to break existing code of yours. [It's anyhow questionable whether I manage this before my defense in March.]
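To illustrate the layout issue being discussed (a minimal sketch with hypothetical class and method names, not ompy's actual API): when the prior is defined as an inner function of `optimize()`, a subclass that only wants a different prior must copy the whole method, whereas exposing the prior as an overridable method keeps the change local.

```python
# Hypothetical sketch only; class and method names are illustrative
# and do not match ompy's real Normalizer classes.

class NormalizerCurrent:
    """Prior is a closure inside optimize(), so changing it means
    copy-pasting the whole method in a subclass."""
    def optimize(self):
        def prior(cube):
            cube[0] = cube[0] * 10  # alpha range hard-coded here
            return cube
        return prior([0.5])

class NormalizerRefactored:
    """Prior exposed as a method: subclasses override one function."""
    def prior(self, cube):
        cube[0] = cube[0] * 10
        return cube

    def optimize(self):
        return self.prior([0.5])

class TightPriorNormalizer(NormalizerRefactored):
    """Only the prior changes; optimize() is inherited unchanged."""
    def prior(self, cube):
        cube[0] = 1.0 + cube[0] * 1.0  # constrain alpha to [1, 2]
        return cube
```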
Btw, here is a bit more of the problem. MultiNest finds several modes, of which the most likely mode (very small alpha) is "bad". It is so obviously "bad" that I proposed to Tellef to just use a harder constraint on the prior than my default.
My code re-implements the normalizer (mostly copy-paste :p). Won't be breaking any of my code :)
Ok, great. I'm looking forward to seeing your results; you've done a lot of good additions. Please consider writing them up as a small follow-up paper (I think they call it a new version announcement?) 👍! Tell me if I can help you with that... I'll see about the priorities once I'm done. Otherwise, of course @tellefs is welcome to write up a PR with the refactored code.
At the moment I don't know enough about how MultiNest functions to make something satisfying. Not sure if this is something I should create a PR for?
You're right, that's not something that would/should be pulled back into the main repo here, but it can exist on your own fork. If you wanted to, you could propose a change to the code where the […]
For future reference, here are the ensemble-normalized plots I got. As one can see, the orange median looks "good", but the errors are huge, due to the "wrong" alpha guesses. As mentioned above, I worked around it by making a child class of the parent class "NormalizerSimultan" and changing the "prior" function inside the "optimize" function. I made the alpha "cube" be between 1 and 2. These values were picked by looking at the first plot Fabio posted, where the alpha plot looks Gaussian-like at around 1.75.
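For reference, the "cube" change described above follows the usual MultiNest convention: the prior callback maps each unit-cube coordinate in [0, 1] to the physical parameter range. A minimal sketch (the [1, 2] endpoints are from the comment above; the function name is illustrative):

```python
def alpha_prior(u, lo=1.0, hi=2.0):
    """Map a MultiNest unit-cube coordinate u in [0, 1] to a
    uniformly distributed alpha in [lo, hi]."""
    return lo + (hi - lo) * u

# u = 0.75 lands at alpha = 1.75, the region where the marginal
# posterior in the first plot looks Gaussian-like.
```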
@tellefs & @fzeiser

```python
for nld, gsf in zip(extractor.nld, extractor.gsf):
    nld.transform(alpha=0.3, inplace=True)  # or alpha=-0.3; if nld.units == 'keV', use alpha=0.3/1e3
    gsf.transform(alpha=0.3, inplace=True)  # or alpha=-0.3; if gsf.units == 'keV', use alpha=0.3/1e3
```
This is a bit funky, because I would have assumed that an alpha=-1.0 or -1.5 transformation before running the normalizer should cut off the lower mode. Are you sure you didn't have some "hidden states" in the notebook? What does the marginals plot look like? -- In the end, I think it is easier to understand if you just change the prior, but now that we observe the problem, it'd be interesting to see...
Marginals plot included above. Also, I restarted the kernel before running, so there should be no hidden states in the notebook?
For these results I did not change the prior. DE results printed below:

2021-02-05 11:58:08,314 - ompy.normalizer_nld - INFO - DE results:
With the DE results, one can constrain the normalizing parameters by e.g. `om.NormalizerNLD.bounds['T'] = ...`
It would be preferable to be able to do this with the MultiNest normalization parameters too.
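A hedged sketch of what a bounds-style interface for the MultiNest parameters could look like (the dict layout, parameter keys, ranges, and helper below are illustrative assumptions, not ompy's API): with per-parameter bounds in one place, the same dict could drive both the DE search and a uniform MultiNest prior transform.

```python
# Illustrative only: a dict of (lower, upper) bounds per normalization
# parameter, in the spirit of om.NormalizerNLD.bounds. Keys and ranges
# are examples, not ompy defaults.
bounds = {
    'T': (0.1, 1.0),
    'Eshift': (-5.0, 5.0),
    'alpha': (1.0, 2.0),  # the tighter alpha range discussed above
}

def uniform_prior(u, key, bounds=bounds):
    """Map a MultiNest unit-cube coordinate u in [0, 1] onto the
    bounded range of the given parameter (uniform prior)."""
    lo, hi = bounds[key]
    return lo + (hi - lo) * u
```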