Implement Gaussian Process as a model #53
Adding notes from our 2021-12-07 meeting here: A first stab at implementing the Gaussian process model is in #60. At the moment, we haven't done any tuning or optimization of the model. If we are eventually going to compare it to FSL eddy, that will need to be done. For testing, some suggestions are to use:
We will eventually need to add better parallelization. Currently, the DTI model chunks the data before fitting; we can reuse that approach here. In the future, we may need to dig into DIPY and could use joblib/dask to further improve the parallelization. See also the comments at https://github.com/nipreps/eddymotion/pull/60/files#r764144040
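The chunking strategy described above could be sketched as follows. This is only an illustration of the pattern, not the DTI model's actual code: `fit_chunk` is a hypothetical stand-in for a per-chunk model fit (here it just computes a per-voxel mean), and the chunk count and joblib usage are assumptions.

```python
# Sketch: split voxel data into chunks and fit each chunk in parallel
# with joblib, mirroring the chunking the DTI model already does.
# `fit_chunk` is a placeholder for a real per-voxel model fit.
import numpy as np
from joblib import Parallel, delayed


def fit_chunk(chunk):
    """Placeholder per-chunk fit: one scalar per voxel (row)."""
    return chunk.mean(axis=1)


def fit_parallel(data, n_chunks=4, n_jobs=-1):
    """Chunk `data` (n_voxels x n_dirs) along voxels and fit in parallel."""
    chunks = np.array_split(data, n_chunks, axis=0)
    results = Parallel(n_jobs=n_jobs)(
        delayed(fit_chunk)(chunk) for chunk in chunks
    )
    return np.concatenate(results)
```

Swapping joblib for dask would mostly change the `Parallel(...)` call; the chunking logic stays the same.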
@josephmje - Not sure if you or anyone else is still working on the GP model, but I figured I'd chime in with a few thoughts to get the conversation going again: the GP model needs at least a PoC design for the kernel before it can be tuned. Here's a first template for what that might look like. High-level requirements: the way most folks seem to have addressed this in the context of q-space is to factorize the covariance function into (a) an angular part and (b) a radial part, with the addition of (c) an error term. The angular part can be constructed using Legendre polynomials, where symmetry about the origin is achieved by excluding odd orders and requiring positive coefficients. Past work seems to have used order <=6, with the consensus being that keeping the first four even terms is best. For the radial part, we could start with sklearn's standard RBF kernel and see how far it gets us. My biggest concern would be performance... and then the final kernel would look something like the following, with ~6 hyperparams that'd still need to be tuned:
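A minimal sketch of the factorized covariance described above: even-order Legendre polynomials over the angle between diffusion directions for the angular part, an RBF over b-values for the radial part, and an iid noise term. All hyperparameter names and default values here are illustrative placeholders, not tuned choices.

```python
# Sketch of a q-space covariance: angular x radial + error term.
# Even Legendre orders (0, 2, 4, 6) enforce antipodal symmetry;
# positive coefficients keep the angular part positive semi-definite.
import numpy as np
from scipy.special import eval_legendre


def angular_kernel(bvecs, coeffs=(1.0, 0.5, 0.25, 0.1)):
    """Even-order Legendre series of cos(angle) between unit b-vectors."""
    cos_theta = np.clip(bvecs @ bvecs.T, -1.0, 1.0)
    return sum(
        c * eval_legendre(2 * n, cos_theta) for n, c in enumerate(coeffs)
    )


def radial_kernel(bvals, length_scale=500.0):
    """Standard RBF over b-values (the radial part)."""
    d2 = (bvals[:, None] - bvals[None, :]) ** 2
    return np.exp(-0.5 * d2 / length_scale**2)


def qspace_kernel(bvecs, bvals, sigma_noise=0.1, **radial_kw):
    """Composite covariance: angular * radial + iid noise on the diagonal."""
    K = angular_kernel(bvecs) * radial_kernel(bvals, **radial_kw)
    return K + sigma_noise**2 * np.eye(len(bvals))
```

The elementwise (Schur) product of two positive semi-definite kernels is itself positive semi-definite, so the composite stays a valid covariance; the noise term then makes it strictly positive definite.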
The bulk of the work on this would be the initial round of tuning during a pre-training stage, using a synthetic, Fiberfox-generated sample as you originally proposed. The objective we'd want to minimize is the negative log marginal likelihood, and the data would be synthesized so that training and validation sets have similar amounts of noise and orders of magnitude, but perhaps different numbers of directions, compared to the evaluation data. A ton of work is involved in this obviously, but I figured it couldn't hurt to discuss more. WDYT?
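For reference, the negative log marginal likelihood objective mentioned above can be computed with the standard Cholesky-based formula for a zero-mean GP. This is a generic sketch, not eddymotion code:

```python
# NLML for a zero-mean GP with covariance K and observations y:
#   0.5 * y^T K^{-1} y + 0.5 * log|K| + (n/2) * log(2*pi)
# Computed via Cholesky factorization for numerical stability.
import numpy as np


def neg_log_marginal_likelihood(K, y):
    n = y.shape[0]
    L = np.linalg.cholesky(K)  # K = L @ L.T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # K^{-1} y
    return (
        0.5 * y @ alpha
        + np.log(np.diag(L)).sum()  # 0.5 * log|K|
        + 0.5 * n * np.log(2 * np.pi)
    )
```

Minimizing this over the ~6 kernel hyperparameters (e.g. with `scipy.optimize.minimize`) would be the core of the pre-training stage.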
This is needed in order to compare baseline performance against FSL eddy.