update posterior sampling method #1279
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅
Additional details and impacted files:

@@ Coverage Diff @@
##             main    #1279       +/-   ##
===========================================
- Coverage   89.24%   78.34%   -10.91%
===========================================
  Files         119      119
  Lines        8695     8705       +10
===========================================
- Hits         7760     6820      -940
- Misses        935     1885      +950
Flags with carried forward coverage won't be shown.
Awesome, thanks!
Regarding the increased sampling time: for the notebook that's okay, but please keep the fast version in the tests. (The reason it is slow is that `posterior.sample_batched()` draws 10k samples for every x_o, because some of them might get rejected. We will change this in the future, and the notebook will then also be fast.)
Regarding the failing test: any idea why this test is failing now (but was passing before)? Just different seeding?
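To illustrate the point above, here is a minimal pure-Python sketch (hypothetical toy, not sbi's actual `sample_batched()` implementation) of why a fixed oversampled batch per observation is slow: all `oversample` draws are made for every x_o, even when almost nothing ends up rejected.

```python
import random

def sample_batched_sketch(n_samples, observations, in_support,
                          oversample=10_000, seed=0):
    """Toy batched rejection sampling: for every observation, draw a
    fixed oversampled batch and reject draws outside the prior support.
    The cost of `oversample` draws per x_o is paid regardless of the
    actual acceptance rate, which is what makes this strategy slow."""
    rng = random.Random(seed)
    batches = []
    for x_o in observations:
        proposals = (rng.gauss(x_o, 1.0) for _ in range(oversample))
        accepted = [s for s in proposals if in_support(s)]
        batches.append(accepted[:n_samples])
    return batches

# Broad support: virtually everything is accepted,
# yet 10k proposals are drawn per observation anyway.
batches = sample_batched_sketch(100, [0.0, 1.0, 2.0], lambda s: abs(s) < 10)
```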
Thanks for your comment @michaeldeistler! I removed the commented line to only leave the new `posterior.sample()` call. As for the failing test: I did some tests and saw that the used
Thanks a lot Julia!
What does this implement/fix? Explain your changes
Use `posterior.sample()` instead of `density_estimator_sample()`.
Does this close any currently open issues?
No.
Any relevant code examples, logs, error output, etc?
Code of cell [5] in the tutorial (see comment below):
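The tutorial cell itself was not captured in this page. As a stand-in, here is a minimal toy sketch (hypothetical stub classes, not sbi's actual implementation) of the behavioural difference the change relies on: sampling through the posterior object enforces the prior support via rejection, while sampling the raw density estimator does not.

```python
import random

class DensityEstimatorStub:
    """Hypothetical stand-in for a raw density estimator: its samples
    may fall outside the prior support."""
    def sample(self, n):
        return [random.gauss(0.0, 3.0) for _ in range(n)]

class PosteriorStub:
    """Hypothetical posterior wrapper: rejects estimator samples that
    leave the prior support, which is the difference between sampling
    the posterior and sampling the estimator directly."""
    def __init__(self, estimator, support=(-2.0, 2.0)):
        self.estimator, self.support = estimator, support

    def sample(self, n):
        lo, hi = self.support
        out = []
        while len(out) < n:
            out.extend(s for s in self.estimator.sample(n) if lo <= s <= hi)
        return out[:n]

posterior = PosteriorStub(DensityEstimatorStub())
samples = posterior.sample(100)  # all samples lie inside the support
```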
Any other comments?
The `lc2st_nf` test fails. The reason is that the `-nf` version is too discriminative on this example, i.e. the false positive rate is not low enough (9% > 5%: too many rejections for a "good" estimator). I could solve this problem either by training it on data from the true distribution instead of the estimator, or by simply not considering `lc2st_nf`
for this test.

Checklist
Put an x in the boxes that apply. You can also fill these out after creating the PR. If you're unsure about any of them, don't hesitate to ask. We're here to help! This is simply a reminder of what we are going to look for before merging your code.

- [x] I have read and understood the contribution guidelines
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have added tests that prove my fix is effective or that my feature works
- [x] I have reported how long the new tests run and potentially marked them with `pytest.mark.slow`.
- [x] I performed linting and formatting as described in the contribution guidelines
- [x] There are no conflicts with the `main` branch

For the reviewer:
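To make the false-positive-rate argument under "Any other comments?" concrete, a small simulation (hypothetical helper, not part of this PR) can estimate the rejection rate of a test under the null hypothesis and compare it to the 5% significance level:

```python
import random

def estimate_false_positive_rate(run_test, n_runs=10_000, seed=0):
    """Estimate the false positive rate of a hypothesis test by running
    it many times on data generated under the null hypothesis and
    counting how often it (wrongly) rejects."""
    rng = random.Random(seed)
    rejections = sum(run_test(rng) for _ in range(n_runs))
    return rejections / n_runs

# Hypothetical test that wrongly rejects a "good" estimator 9% of the
# time, mimicking the lc2st_nf behaviour described above.
too_discriminative = lambda rng: rng.random() < 0.09

fpr = estimate_false_positive_rate(too_discriminative)
assert fpr > 0.05  # exceeds the 5% level: the test is too discriminative
```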