mean function question #12
Hello,
Great paper, really interesting work.
I'm curious about the purpose of the mean function for the GP prior on the factors. I've seen several other factor models specify mean = 0 for the GP; do you mind sharing your thoughts on the rationale behind the mean function on the coordinates, and how that plays into optimization and sampling from the GP after optimization? Or anything else I may not be thinking of.
Thanks for your time!
Jordan
Comments
Hi Jordan, thanks for your interest. The case of mean = 0 is a special case of our model: if that is the correct process, the optimization should simply estimate beta0 = 0 for each component process (and, if there is no gradient across the spatial domain, it should also estimate beta1 = 0). Since we pass the components through an exponential function, a mean-zero GP is really a "log-GP" with geometric mean 1. Because this is combined with nonnegative weights, I didn't want to make any assumptions about the correct mean value. I also think having a mean function can improve numerical stability (although I didn't test this): even though a zero-mean GP can approximate any function, it will want to pull the function back toward zero, and if the data are all far from zero this creates friction between the prior and the likelihood that can cause numerical problems. If you don't like the mean function, it would probably not be too hard to modify the code so that those parameters are fixed to zero rather than estimated.
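(For anyone reading along, here is a minimal plain-numpy sketch of the idea above; it is not the package's actual TensorFlow implementation, and the beta0/beta1 values are illustrative, not fitted. It shows that exponentiating a zero-mean GP gives a process whose geometric mean is 1, and that a linear mean function beta0 + beta1 * s lets a component sit at whatever scale the data require instead of being pulled back toward exp(0) = 1.)

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.linspace(0, 1, 200)                                 # 1-D spatial coordinates
K = np.exp(-0.5 * (s[:, None] - s[None, :])**2 / 0.1**2)   # RBF kernel, lengthscale 0.1

# Zero-mean GP draw: since E[f0] = 0, exp(f0) fluctuates (geometrically) around 1.
f0 = rng.multivariate_normal(np.zeros_like(s), K + 1e-8 * np.eye(len(s)))
print(np.exp(f0.mean()))  # geometric mean of exp(f0); near 1 up to sampling noise

# With a linear mean function, the same draw is shifted and tilted, so the
# nonlinear GP part only has to model local deviations from the trend.
beta0, beta1 = 2.0, -1.5        # illustrative values, in practice estimated
f = beta0 + beta1 * s + f0
factor = np.exp(f)              # nonnegative spatial factor
```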
Hi Dr. Townes, thanks for your insight and prompt response. I had to take a moment to digest before getting back to you. That makes a lot of sense. Would you say, then, that the mean function is much more necessary for the Poisson likelihood, where the underlying data are likely to deviate globally from a geometric mean of 1 after the nonnegativity constraint (exp in this case)? Whereas, if the data were normalized, centered, and scaled in preparation for using the normal likelihood, then a GP with mean = 0 is suitable, though the mean function could still serve the purpose of modeling global spatial variation? Also, I think I see that when using the normal likelihood you only center and do not apply library-size normalization and scaling. If that's the case, why is that? I really appreciate your feedback on this. Best, Jordan
I think if you apply centering/scaling/etc. you don't need the intercept term "beta0", but you could possibly still benefit from the "slope" term, which would handle any broad gradients and allow the nonlinear GP part to focus on localized variation. It would probably still work without the slope term, though, since the GP is pretty flexible. I'm pretty sure the normal likelihood expects the "data" to already be normalized (one of the preprocessing functions sets layer=None to use the default layer of the anndata object, i.e., the normalized counts). It applies centering, but the feature means are stored so that predictions can be made on the original scale. I don't think we did any scaling, since there are parameters sigma^2_j (one for each feature) to absorb that source of variation.
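(A short sketch of the centering/back-transform logic described above, with hypothetical helper names rather than the package's actual API: feature means are stored so predictions can be returned on the original scale, and no per-feature scaling is applied because the model's sigma^2_j parameters absorb that variance.)

```python
import numpy as np

def center_features(Y):
    """Center each feature (column) of normalized data Y; keep the means."""
    feature_means = Y.mean(axis=0)
    return Y - feature_means, feature_means

def uncenter_predictions(Y_hat, feature_means):
    """Map model predictions back to the original (uncentered) scale."""
    return Y_hat + feature_means

# Toy usage: centering is invertible given the stored feature means.
Y = np.array([[1.0, 10.0], [3.0, 14.0]])   # pretend these are normalized counts
Yc, mu = center_features(Y)
assert np.allclose(uncenter_predictions(Yc, mu), Y)
```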