Sampling noise prior #29
Comments
So I don't really have strong thoughts; I mostly just use the recommended parameterization in the Stan community. I would usually do […]. Broadly, I think this is also what most other people typically do, at least from what I have seen.
Yeah, I can see why they do that from the link.
Yeah, it makes good sense, and the PC prior math is super cool. My concern is that (at least in these settings) it doesn't seem to be working that well empirically. The prior seems to be generally much stronger than the information in the data, and the parameter is generally poorly sampled. Now that I'm thinking it through, perhaps it does make sense to go with the recommended Stan default and see whether any of the latent processes make a difference in sampling. Perhaps worth revisiting this issue after all the latent processes are set up and we can look at prior predictive distributions?
I think this is more a general issue with trying to estimate overdispersion in these models than with the specific transform. I more think the conclusion is that a tighter prior on […]. I am happy to stick with what we have here or to revisit. What is the current setting in the ww work? Perhaps we can agree to use that for now and revisit in 1.5?
Where are we on this? Is it backlogged for a future milestone, or […]?
Rt-without-renewal/EpiAware/src/models.jl, line 10 at f7bf1c6
Let's think about this prior a bit in a new issue. It's a reasonable prior, but I think the PC prior using the inverse square root is usually what's recommended? This is on 1/k, so it's close but not exactly the same. But sampling from that prior has also behaved pretty poorly for me…
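To make the "close but not exactly the same" point concrete, here is a hedged sketch contrasting the two transforms: an exponential on 1/sqrt(phi) (the PC-style inverse square root) versus the same exponential on 1/k directly. The rate value is an assumption picked for illustration, not the one in models.jl:

```python
import random

random.seed(0)

lam = 1.5   # assumed exponential rate; illustrative only
n = 10_000

# (a) PC-style transform: 1/sqrt(phi) ~ Exponential(lam)
pc_phi = []
for _ in range(n):
    s = max(random.expovariate(lam), 1e-12)
    pc_phi.append(1.0 / s**2)

# (b) Direct reciprocal: 1/k ~ Exponential(lam)
inv_phi = []
for _ in range(n):
    r = max(random.expovariate(lam), 1e-12)
    inv_phi.append(1.0 / r)

median = lambda xs: sorted(xs)[len(xs) // 2]
# Squaring the reciprocal pushes much more prior mass toward large phi
# (near-Poisson behaviour) and fattens the upper tail, which is one
# plausible source of the poor sampling behaviour described above.
print(f"median phi, inverse-square-root transform: {median(pc_phi):.2f}")
print(f"median phi, 1/k transform:                 {median(inv_phi):.2f}")
```

Overlaying the two induced densities on phi for the actual rates used would show exactly how far apart they are in the region the data can inform.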
@seabbs -- I know you have thoughts here.
Originally posted by @zsusswein in #28 (comment)