Commit
deploy: fa16aa9
punkduckable committed Oct 29, 2024
1 parent 361b63d commit d0aa2d7
Showing 19 changed files with 160 additions and 637 deletions.
Binary file modified .doctrees/autoapi/lasdi/gp/index.doctree
Binary file modified .doctrees/autoapi/lasdi/inputs/index.doctree
Binary file modified .doctrees/autoapi/lasdi/param/index.doctree
Binary file modified .doctrees/autoapi/lasdi/timing/index.doctree
Binary file modified .doctrees/environment.pickle
Binary file modified .doctrees/index.doctree
69 changes: 10 additions & 59 deletions _sources/autoapi/lasdi/gp/index.rst.txt
@@ -17,76 +17,27 @@ Functions
Module Contents
---------------

.. py:function:: fit_gps(X: numpy.ndarray, Y: numpy.ndarray) -> list[sklearn.gaussian_process.GaussianProcessRegressor]
Trains a GP for each column of Y. If Y has shape N x k, then we train k GP regressors. In this
case, we assume that X has shape N x M. Thus, each input to the GP lies in \mathbb{R}^M. For
each k, we train a GP whose i'th training input is the i'th row of X and whose corresponding
target is the (i, k) component of Y. We return a list of k GP regressor objects, the k'th of
which makes predictions for the k'th coefficient in the latent dynamics.
.. py:function:: fit_gps(X, Y)
Trains each GP given the interpolation dataset.
X: (n_train, n_param) numpy 2d array
Y: (n_train, n_coef) numpy 2d array
We assume that each target coefficient is independent of the others.
gp_dictionnary is a dataset containing the trained GPs (as sklearn objects)



-----------------------------------------------------------------------------------------------
:Parameters: * **X** -- A 2d numpy array of shape (n_train, input_dim), where n_train is the
               number of training examples and input_dim is the number of components in each
               input (e.g., the number of parameters).
             * **Y** -- A 2d numpy array of shape (n_train, n_coef), where n_train is the number
               of training examples and n_coef is the number of coefficients in the latent
               dynamics.

-----------------------------------------------------------------------------------------------
:returns: A list of trained GP regressor objects. If Y has k columns, then the returned list
          has k elements. Its i'th element holds a trained GP regressor object whose training
          inputs are the rows of X and whose corresponding target values are the elements of
          the i'th column of Y.
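The per-column fitting described above can be sketched with scikit-learn. This is a hedged, minimal re-implementation, not LaSDI's actual code; the kernel choice and toy training data are illustrative assumptions.

```python
# Hypothetical sketch of fit_gps: train one GP regressor per column of Y.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def fit_gps_sketch(X, Y):
    """X: (n_train, n_param). Y: (n_train, n_coef). Returns one GP per column of Y."""
    gp_list = []
    for k in range(Y.shape[1]):
        # Kernel choice is an assumption made for this sketch.
        gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                                      n_restarts_optimizer=3)
        gp.fit(X, Y[:, k])  # i'th row of X -> (i, k) entry of Y
        gp_list.append(gp)
    return gp_list

# Toy 1-parameter problem with two latent-dynamics coefficients.
X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
Y = np.column_stack([np.sin(2.0 * X[:, 0]), np.cos(2.0 * X[:, 0])])
gps = fit_gps_sketch(X, Y)
```

Note that each GP is fit independently, matching the stated assumption that the target coefficients are independent of one another.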


.. py:function:: eval_gp(gp_list: list[sklearn.gaussian_process.GaussianProcessRegressor], param_grid: numpy.ndarray) -> tuple
.. py:function:: eval_gp(gp_dictionnary, param_grid)
Computes each GP's predictive mean and standard deviation at each point of the parameter-space grid.



-----------------------------------------------------------------------------------------------
:Parameters: * **gp_list** -- A list of trained GP regressor objects, one per coefficient in
               the latent dynamics. The i'th element of this list is a GP regressor object that
               predicts the i'th coefficient.
             * **param_grid** -- A 2d numpy.ndarray object of shape (number of parameter
               combinations, number of parameters). The (i, j) element of this array specifies
               the value of the j'th parameter in the i'th combination of parameters. We use
               this as the testing set for the GP evaluation.

-----------------------------------------------------------------------------------------------
:returns: A two-element tuple. Both elements are 2d numpy arrays of shape (number of parameter
          combinations, number of coefficients), holding the predicted means and standard
          deviations, respectively. Thus, the (i, j) element of the first return variable holds
          the predicted mean of the j'th coefficient in the latent dynamics at the i'th
          combination of parameter values. Likewise, the (i, j) element of the second return
          variable holds the standard deviation of the predicted distribution for the j'th
          coefficient at the i'th combination of parameter values.
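A minimal sketch of this evaluation, assuming scikit-learn GPs: loop over the GPs and query each one over the whole grid with `return_std=True`. The `gp_list` construction below is a toy stand-in for whatever `fit_gps` returns.

```python
# Hypothetical sketch of eval_gp: query every GP at every row of param_grid.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def eval_gp_sketch(gp_list, param_grid):
    """Returns (means, stds), each of shape (n_points, n_coef)."""
    means = np.zeros((param_grid.shape[0], len(gp_list)))
    stds = np.zeros_like(means)
    for k, gp in enumerate(gp_list):
        # predict returns the posterior mean and std at each test point.
        means[:, k], stds[:, k] = gp.predict(param_grid, return_std=True)
    return means, stds

# Toy stand-in for a trained gp_list (one GP per coefficient column of Y).
X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
Y = np.column_stack([np.sin(2.0 * X[:, 0]), np.cos(2.0 * X[:, 0])])
gp_list = [GaussianProcessRegressor().fit(X, Y[:, k]) for k in range(Y.shape[1])]

param_grid = np.array([[0.25], [0.75]])
means, stds = eval_gp_sketch(gp_list, param_grid)
```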


.. py:function:: sample_coefs(gp_list: list[sklearn.gaussian_process.GaussianProcessRegressor], param: numpy.ndarray, n_samples: int)
Generates sets of ODE (SINDy) coefficients sampled from the predictive distribution for those
coefficients at the specified parameter value (param). Specifically, for the k'th SINDy
coefficient, we draw n_samples samples from the predictive distribution for the k'th
coefficient when param is the parameter.


.. py:function:: sample_coefs(gp_dictionnary, param, n_samples)
-----------------------------------------------------------------------------------------------
:Parameters: * **gp_list** -- A list of trained GP regressor objects, one per coefficient in
               the latent dynamics. The i'th element of this list is a GP regressor object that
               predicts the i'th coefficient.
             * **param** -- A combination of parameter values, i.e., a single test example. We
               evaluate each GP in gp_list at this parameter value (getting a prediction for
               each coefficient).
             * **n_samples** -- The number of samples of the predicted latent dynamics used to
               build the ensemble of FOM predictions. N_s in the paper.
Generates sample sets of ODE coefficients for one given parameter.
coef_samples is a list of length n_samples, where each term is a matrix of SINDy coefficients
sampled from the GP predictive distributions.

-----------------------------------------------------------------------------------------------
:returns: A 2d numpy ndarray object called coef_samples. It has shape (n_samples, n_coef),
          where n_coef is the number of coefficients (the length of gp_list). The (i, j)
          element of this array is the i'th sample of the j'th SINDy coefficient.
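Since the coefficients are modeled as independent, the sampling step can be sketched by drawing from each GP's predictive normal separately. This is an illustrative sketch, not LaSDI's implementation; the toy GPs and the `seed` argument are assumptions.

```python
# Hypothetical sketch of sample_coefs: draw n_samples from each coefficient's
# predictive normal distribution at a single parameter combination.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def sample_coefs_sketch(gp_list, param, n_samples, seed=None):
    """param: (n_param,). Returns coef_samples of shape (n_samples, n_coef)."""
    rng = np.random.default_rng(seed)
    coef_samples = np.zeros((n_samples, len(gp_list)))
    for k, gp in enumerate(gp_list):
        mean, std = gp.predict(param.reshape(1, -1), return_std=True)
        # Sample the k'th coefficient from N(mean, std^2), independently of the others.
        coef_samples[:, k] = rng.normal(mean[0], std[0], n_samples)
    return coef_samples

# Toy stand-in for a trained gp_list.
X = np.linspace(0.0, 1.0, 8).reshape(-1, 1)
Y = np.column_stack([np.sin(2.0 * X[:, 0]), np.cos(2.0 * X[:, 0])])
gp_list = [GaussianProcessRegressor().fit(X, Y[:, k]) for k in range(Y.shape[1])]

coef_samples = sample_coefs_sketch(gp_list, np.array([0.3]), n_samples=20, seed=0)
```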


61 changes: 18 additions & 43 deletions _sources/autoapi/lasdi/inputs/index.rst.txt
@@ -20,71 +20,46 @@ Classes
lasdi.inputs.InputParser


Functions
---------

.. autoapisummary::

lasdi.inputs.getDictFromList


Module Contents
---------------

.. py:data:: verbose
:type: bool
:value: False


.. py:class:: InputParser(dict: InputParser.__init__.dict, name: str = '')
An InputParser object acts as a wrapper around a dictionary of settings. Each setting is
a key and the corresponding value is the setting's value. Because one setting may itself be
a dictionary (we often group settings; each group has a name but several constituent settings),
the underlying dictionary is structured as a sequence of nested dictionaries. This class allows
the user to select a specific setting from that structure by specifying (via a list of strings)
where in that nested structure the desired setting lives.

.. py:class:: InputParser(dict, name='')
.. py:attribute:: dict_
:type: dict
:value: None



.. py:attribute:: name
:type: str
:value: ''



.. py:method:: getInput(keys: list, fallback=None, datatype=None)
An InputParser object acts as a wrapper around a dictionary of settings. That is, self.dict_
is structured as a nested family of dictionaries. Each setting corresponds to a key in
self.dict_, and the setting's value is the corresponding value in self.dict_. In many cases,
a particular setting may be nested within others; that is, a setting's value may itself be
another dictionary housing various sub-settings. This function allows us to fetch a
specific setting from this nested structure.

Specifically, we specify a list of strings. keys[0] should be a key in self.dict_.
If so, we set val = self.dict_[keys[0]]. If there are more keys, then val should be a
dictionary and keys[1] should be a key in this dictionary. In this case, we replace val
with val[keys[1]], and so on. This continues until we have exhausted all keys. There is one
important exception:

If at some point in the process there are more keys but val is not a dictionary, or if
there are more keys and val is a dictionary but the next key is not a key in that
dictionary, then we return the fallback value. If no fallback value was provided, we
raise an error.
.. py:method:: getInput(keys, fallback=None, datatype=None)
Find the value corresponding to the list of keys.
If the specified keys do not exist, use the fallback value.
If the fallback value does not exist, raise an error.
If the datatype is specified, enforce that the output value has the right datatype.


-------------------------------------------------------------------------------------------
:Parameters: * **keys** -- A list of keys we want to fetch from self.dict_. keys[0] should be
               a key in self.dict_. If so, we set val = self.dict_[keys[0]]. If there are more
               keys, then val should be a dictionary and keys[1] should be a key in this
               dictionary; in this case, we replace val with val[keys[1]], and so on. This
               continues until we have exhausted all keys.
             * **fallback** -- A sort of default value. If, at some point, val is not a
               dictionary (and there are more keys), or val is a dictionary but the next key is
               not a valid key in that dictionary, then we return the fallback value.
             * **datatype** -- If not None, then we require that the final val has this
               datatype. If the final val does not have the desired datatype, we raise an
               exception.

-------------------------------------------------------------------------------------------
:rtype: The final val value as outlined by the process described above.
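The key-walking logic described above can be sketched in plain Python. This is a hedged re-implementation for illustration only; the error messages, the `None`-as-no-fallback convention, and the config key names in the demo are assumptions, not the library's.

```python
# Hypothetical sketch of InputParser.getInput's nested-key lookup.
def get_input_sketch(settings, keys, fallback=None, datatype=None):
    val = settings
    for key in keys:
        if not isinstance(val, dict) or key not in val:
            # The keys ran off the nested structure: fall back or fail.
            if fallback is None:
                raise RuntimeError(f"Setting {keys} not found and no fallback given.")
            return fallback
        val = val[key]
    if (datatype is not None) and (not isinstance(val, datatype)):
        raise RuntimeError(f"Setting {keys} has type {type(val)}, expected {datatype}.")
    return val

# Demo with made-up setting names.
config = {"latent_dynamics": {"sindy": {"threshold": 0.1}}}
thresh = get_input_sketch(config, ["latent_dynamics", "sindy", "threshold"], datatype=float)
depth = get_input_sketch(config, ["latent_dynamics", "nn_depth"], fallback=3)
```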
.. py:function:: getDictFromList(list_, inputDict)
Get a dict containing {key: val} from a list of dicts.
NOTE: this returns only the first matching dict in the list,
even if the list contains more than one dict with {key: val}.
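A minimal sketch of this first-match behavior, under the assumption that "a dict with {key: val}" means a dict containing every key/value pair of inputDict; the timer-style demo data is invented for illustration.

```python
# Hypothetical sketch of getDictFromList: return the first dict in list_ that
# contains every {key: val} pair of input_dict.
def get_dict_from_list_sketch(list_, input_dict):
    for d in list_:
        if all(d.get(key) == val for key, val in input_dict.items()):
            return d  # only the first match is returned
    raise RuntimeError("No matching dict found.")

timers = [{"name": "train", "calls": 4},
          {"name": "eval", "calls": 9},
          {"name": "eval", "calls": 1}]
match = get_dict_from_list_sketch(timers, {"name": "eval"})
```

Even though two dicts have `"name": "eval"`, only the first one is returned, matching the NOTE above.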

