diff --git a/README.md b/README.md
index 97d7342c..2028ca49 100644
--- a/README.md
+++ b/README.md
@@ -14,26 +14,40 @@
 ![GitHub Workflow Status (with event)](https://img.shields.io/github/actions/workflow/status/lnccbrown/HSSM/run_slow_tests.yml)
 ![GitHub Repo stars](https://img.shields.io/github/stars/lnccbrown/HSSM)
 [![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
-[![codecov](https://codecov.io/gh/lnccbrown/HSSM/branch/main/graph/badge.svg)](https://codecov.io/gh/lnccbrown/HSSM)
-
 ### Overview
 
-HSSM is a Python toolbox that provides a seamless combination of state-of-the-art likelihood approximation methods with the wider ecosystem of probabilistic programming languages. It facilitates flexible hierarchical model building and inference via modern MCMC samplers. HSSM is user-friendly and provides the ability to rigorously estimate the impact of neural and other trial-by-trial covariates through parameter-wise mixed-effects models for a large variety of cognitive process models. HSSM is a BRAINSTORM project in collaboration with the Center for Computation and Visualization and the Center for Computational Brain Science within the Carney Institute at Brown University.
-
-- Allows approximate hierarchical Bayesian inference via various likelihood approximators.
-- Estimate impact of neural and other trial-by-trial covariates via native hierarchical mixed-regression support.
+HSSM is a Python toolbox that provides a seamless combination of
+state-of-the-art likelihood approximation methods with the wider ecosystem of
+probabilistic programming languages. It facilitates flexible hierarchical model
+building and inference via modern MCMC samplers. HSSM is user-friendly and
+provides the ability to rigorously estimate the impact of neural and other
+trial-by-trial covariates through parameter-wise mixed-effects models for a
+large variety of cognitive process models. HSSM is a
+BRAINSTORM project in
+collaboration with the Center for Computation and Visualization and the Center
+for Computational Brain Science within the Carney Institute at Brown University.
+
+- Allows approximate hierarchical Bayesian inference via various likelihood
+  approximators.
+- Estimates the impact of neural and other trial-by-trial covariates via native
+  hierarchical mixed-regression support.
 - Extensible for users to add novel models with corresponding likelihoods.
 - Built on PyMC with support from the Python Bayesian ecosystem at large.
-- Incorporates Bambi's intuitive `lmer`-like regression parameter specification for within- and between-subject effects.
-- Native ArviZ support for plotting and other convenience functions to aid the Bayesian workflow.
-- Utilizes the ONNX format for translation of differentiable likelihood approximators across backends.
+- Incorporates Bambi's intuitive `lmer`-like regression parameter specification
+  for within- and between-subject effects.
+- Native ArviZ support for plotting and other convenience functions to aid the
+  Bayesian workflow.
+- Utilizes the ONNX format for translation of differentiable likelihood
+  approximators across backends.
 
 ### [Official documentation](https://lnccbrown.github.io/HSSM/).
 
 ## Cite HSSM
 
-Fengler, A., Xu, P., Bera, K., Omar, A., Frank, M.J. (in preparation). HSSM: A generalized toolbox for hierarchical bayesian estimation of computational models in cognitive neuroscience.
+Fengler, A., Xu, Y., Bera, K., Omar, A., Frank, M.J. (in preparation). HSSM: A
+generalized toolbox for hierarchical Bayesian estimation of computational models
+in cognitive neuroscience.
 
 ## Example
 
@@ -43,7 +57,7 @@ Here is a simple example of how to use HSSM:
 import hssm
 
 # Load a package-supplied dataset
-cav_data = hssm.load_data('cavanagh_theta')
+cav_data = hssm.load_data("cavanagh_theta")
 
 # Define a basic hierarchical model with trial-level covariates
 model = hssm.HSSM(
@@ -66,12 +80,18 @@ model = hssm.HSSM(
 model.sample()
 ```
 
-To quickly get started with HSSM, please follow [this tutorial](https://lnccbrown.github.io/HSSM/getting_started/getting_started/).
-For a deeper dive into HSSM, please follow [our main tutorial](https://lnccbrown.github.io/HSSM/tutorials/main_tutorial/).
+To quickly get started with HSSM, please follow
+[this tutorial](https://lnccbrown.github.io/HSSM/getting_started/getting_started/).
+For a deeper dive into HSSM, please follow
+[our main tutorial](https://lnccbrown.github.io/HSSM/tutorials/main_tutorial/).
 
 ## Installation
 
-HSSM can be directly installed into your conda environment on Linux and MacOS. Installing HSSM on windows takes only one more simple step. We have a more detailed [installation guide](https://lnccbrown.github.io/HSSM/getting_started/installation/) for users with more specific setups.
+HSSM can be directly installed into your conda environment on Linux and MacOS.
+Installing HSSM on Windows takes only one more simple step. We have a more
+detailed
+[installation guide](https://lnccbrown.github.io/HSSM/getting_started/installation/)
+for users with more specific setups.
 
 ### Install HSSM on Linux and MacOS (CPU only)
 
@@ -83,7 +103,8 @@ conda install -c conda-forge hssm
 
 ### Install HSSM on Linux and MacOS (with GPU Support)
 
-If you need to sample with GPU, please install JAX with GPU support before installing HSSM:
+If you need to sample with a GPU, please install JAX with GPU support before
+installing HSSM:
 
 ```bash
 conda install jaxlib=*=*cuda* jax cuda-nvcc -c conda-forge -c nvidia
@@ -92,7 +113,9 @@ conda install -c conda-forge hssm
 
 ### Install HSSM on Windows (CPU only)
 
-Because dependencies such as `jaxlib` and `numpyro` are not up-to-date on Conda, the easiest way to install HSSM on Windows is to install PyMC first and install HSSM via `pip`:
+Because dependencies such as `jaxlib` and `numpyro` are not up-to-date on Conda,
+the easiest way to install HSSM on Windows is to install PyMC first and install
+HSSM via `pip`:
 
 ```bash
 conda install -c conda-forge pymc
@@ -110,20 +133,27 @@ pip install hssm[cuda12]
 
 ### Support for Apple Silicon, AMD, and other GPUs
 
-JAX also has support other GPUs. Please follow the [Official JAX installation guide](https://jax.readthedocs.io/en/latest/installation.html) to install the correct version of JAX before installing HSSM.
-
+JAX also supports other GPUs. Please follow the
+[Official JAX installation guide](https://jax.readthedocs.io/en/latest/installation.html)
+to install the correct version of JAX before installing HSSM.
 
 ## Advanced Installation
 
 ### Install HSSM directly with Pip
 
-HSSM is also available through PyPI. You can directly install it with pip into any virtual environment via:
+HSSM is also available through PyPI. You can directly install it with pip into
+any virtual environment via:
 
 ```bash
 pip install hssm
 ```
 
-**Note:** While this installation is much simpler, you might encounter this warning message `WARNING (pytensor.tensor.blas): Using NumPy C-API based implementation for BLAS functions.` Please refer to our [advanced installation guide](https://lnccbrown.github.io/HSSM/getting_started/installation/) for more details.
+**Note:** While this installation is much simpler, you might encounter this
+warning message:
+`WARNING (pytensor.tensor.blas): Using NumPy C-API based implementation for BLAS functions.`
+Please refer to our
+[advanced installation guide](https://lnccbrown.github.io/HSSM/getting_started/installation/)
+for more details.
 
 ### Install the dev version of HSSM
 
@@ -135,7 +165,9 @@ pip install git+https://github.com/lnccbrown/HSSM.git
 ```
 
 ### Install HSSM on Google Colab
 
-Google Colab comes with PyMC and JAX pre-configured. That holds true even if you are using the GPU and TPU backend, so you simply need to install HSSM via pip on Colab regardless of the backend you are using:
+Google Colab comes with PyMC and JAX pre-configured. That holds true even if you
+are using a GPU or TPU backend, so you simply need to install HSSM via pip on
+Colab regardless of the backend you are using:
 
 ```bash
 !pip install hssm
 ```
 
 ## Troubleshooting
 
-**Note:** Possible solutions to any issues with installations with hssm can be located
-[here](https://github.com/lnccbrown/HSSM/discussions). Also feel free to start a new
-discussion thread if you don't find answers there. We recommend installing HSSM into
-a new conda environment with Python 3.10 or 3.11 to prevent any problems with dependencies
-during the installation process. Please note that hssm is only tested for python 3.10,
-3.11. As of HSSM v0.2.0, support for Python 3.9 is dropped. Use unsupported python
-versions with caution.
+**Note:** Possible solutions to issues with installing HSSM can be
+found [here](https://github.com/lnccbrown/HSSM/discussions). Also feel free to
+start a new discussion thread if you don't find answers there. We recommend
+installing HSSM into a new conda environment with Python 3.10 or 3.11 to prevent
+any problems with dependencies during the installation process. Please note that
+HSSM is only tested on Python 3.10 and 3.11. As of HSSM v0.2.0, support for
+Python 3.9 is dropped. Use unsupported Python versions with caution.
 
 ## License
 
-HSSM is licensed under [Copyright 2023, Brown University, Providence, RI](LICENSE)
+HSSM is licensed under
+[Copyright 2023, Brown University, Providence, RI](LICENSE).
 
 ## Support
 
-For questions, please feel free to [open a discussion](https://github.com/lnccbrown/HSSM/discussions).
+For questions, please feel free to
+[open a discussion](https://github.com/lnccbrown/HSSM/discussions).
 
-For bug reports and feature requests, please feel free to [open an issue](https://github.com/lnccbrown/HSSM/issues) using the corresponding template.
+For bug reports and feature requests, please feel free to
+[open an issue](https://github.com/lnccbrown/HSSM/issues) using the
+corresponding template.
 
 ## Contribution
 
-If you want to contribute to this project, please follow our [contribution guidelines](docs/CONTRIBUTING.md).
+If you want to contribute to this project, please follow our
+[contribution guidelines](docs/CONTRIBUTING.md).
 ## Acknowledgements
 
-We would like to extend our gratitude to the following individuals for their valuable contributions to the development of the HSSM package:
+We would like to extend our gratitude to the following individuals for their
+valuable contributions to the development of the HSSM package:
 
-- [Bambi](https://github.com/bambinos/bambi) - A special thanks to the Bambi project for providing inspiration, guidance, and support throughout the development process. [Tomás Capretto](https://github.com/tomicapretto), a key contributor to Bambi, provided invaluable assistance in the development of the HSSM package.
+- [Bambi](https://github.com/bambinos/bambi) - A special thanks to the Bambi
+  project for providing inspiration, guidance, and support throughout the
+  development process. [Tomás Capretto](https://github.com/tomicapretto), a key
+  contributor to Bambi, provided invaluable assistance in the development of the
+  HSSM package.
 
-Those contributions have greatly enhanced the functionality and quality of the HSSM.
+Those contributions have greatly enhanced the functionality and quality of
+HSSM.
diff --git a/docs/api/distribution_utils.md b/docs/api/distribution_utils.md
index 231b5554..f7a72ac2 100644
--- a/docs/api/distribution_utils.md
+++ b/docs/api/distribution_utils.md
@@ -1 +1,19 @@
+The `hssm.distribution_utils` module contains useful functions for building `pm.Distribution`
+classes. Other than the `download_hf` function, which downloads ONNX models shared on our
+[Hugging Face model repository](https://huggingface.co/franklab/HSSM/tree/main), you will
+generally not have to use these functions. Advanced users who want to build their
+own PyMC models can use these functions to create the `pm.Distribution` and
+`RandomVariable` classes they desire.
+
 ::: hssm.distribution_utils
+    handler: python
+    options:
+      members:
+        - download_hf
+        - make_distribution
+        - make_ssm_rv
+        - make_family
+        - make_likelihood_callable
+        - make_missing_data_callable
+        - make_blackbox_op
+        - assemble_callables
diff --git a/docs/api/hssm.md b/docs/api/hssm.md
index 8373750e..00872a28 100644
--- a/docs/api/hssm.md
+++ b/docs/api/hssm.md
@@ -1 +1,27 @@
-::: hssm
+Use the `hssm.HSSM` class to construct an HSSM model.
+
+::: hssm.HSSM
+    handler: python
+    options:
+      show_root_heading: true
+      show_signature_annotations: true
+      show_object_full_path: false
+      show_signature: true # Make sure this is true
+      docstring_options:
+        ignore_init_summary: false
+      members:
+        - traces
+        - pymc_model
+        - sample
+        - sample_posterior_predictive
+        - sample_prior_predictive
+        - vi
+        - find_MAP
+        - log_likelihood
+        - summary
+        - plot_trace
+        - graph
+        - plot_posterior_predictive
+        - plot_quantile_probability
+        - restore_traces
+        - initial_point
diff --git a/docs/api/likelihoods.md b/docs/api/likelihoods.md
index ceee148d..e1e9efa1 100644
--- a/docs/api/likelihoods.md
+++ b/docs/api/likelihoods.md
@@ -1 +1,7 @@
+The `hssm.likelihoods` submodule exports a few likelihood functions. These functions
+are already used in the model-building process for some supported models, such as `ddm`,
+`ddm_sdv`, and `full_ddm`, so you typically would not have to deal with them. However,
+they can be helpful if you want to build a model yourself in PyMC, as in the sketch
+below. Please check out [this tutorial](../tutorials/pymc.ipynb) for more details.
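+
+A minimal sketch of this approach (illustrative, not the tutorial code itself; it
+assumes the `DDM` distribution class exported by this submodule and a simulated
+dataset whose `rt` and `response` columns hold the observations):
+
+```python
+import pymc as pm
+
+import hssm
+from hssm.likelihoods import DDM  # analytical DDM distribution for PyMC
+
+# Simulate 500 trials from known parameters (v, a, z, t)
+data = hssm.simulate_data(model="ddm", theta=dict(v=0.5, a=1.5, z=0.5, t=0.5), size=500)
+
+with pm.Model():
+    # Priors over the four DDM parameters
+    v = pm.Uniform("v", lower=-10.0, upper=10.0)
+    a = pm.HalfNormal("a", sigma=2.0)
+    z = pm.Uniform("z", lower=0.01, upper=0.99)
+    t = pm.Uniform("t", lower=0.0, upper=0.6)
+    # Each observation is an (rt, response) pair, one row per trial
+    DDM("rt,response", v=v, a=a, z=z, t=t, observed=data[["rt", "response"]].values)
+    idata = pm.sample()
+```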
+
 :::hssm.likelihoods
diff --git a/docs/api/link.md b/docs/api/link.md
new file mode 100644
index 00000000..c39a5a34
--- /dev/null
+++ b/docs/api/link.md
@@ -0,0 +1 @@
+::: hssm.Link
diff --git a/docs/api/load_data.md b/docs/api/load_data.md
new file mode 100644
index 00000000..4502d55d
--- /dev/null
+++ b/docs/api/load_data.md
@@ -0,0 +1 @@
+::: hssm.load_data
diff --git a/docs/api/model_config.md b/docs/api/model_config.md
new file mode 100644
index 00000000..03a1fe1c
--- /dev/null
+++ b/docs/api/model_config.md
@@ -0,0 +1 @@
+::: hssm.ModelConfig
diff --git a/docs/api/param.md b/docs/api/param.md
new file mode 100644
index 00000000..926c8e00
--- /dev/null
+++ b/docs/api/param.md
@@ -0,0 +1 @@
+::: hssm.Param
diff --git a/docs/api/plotting.md b/docs/api/plotting.md
index 2fc4f20c..8bc839dc 100644
--- a/docs/api/plotting.md
+++ b/docs/api/plotting.md
@@ -1 +1,8 @@
+The `hssm.plotting` module provides functions for creating HSSM-specific plots, such
+as posterior predictive plots. Please check out
+[the plotting tutorial](../tutorials/plotting.ipynb) for more examples of how to use
+these functions. Note that each plotting function has an equivalent in the
+[`hssm.HSSM`](hssm.md) class, so you can call it on a built model without
+having to import these functions.
+
 :::hssm.plotting
diff --git a/docs/api/prior.md b/docs/api/prior.md
new file mode 100644
index 00000000..29490d6c
--- /dev/null
+++ b/docs/api/prior.md
@@ -0,0 +1 @@
+::: hssm.Prior
diff --git a/docs/api/set_floatx.md b/docs/api/set_floatx.md
new file mode 100644
index 00000000..3bc10d53
--- /dev/null
+++ b/docs/api/set_floatx.md
@@ -0,0 +1 @@
+::: hssm.set_floatX
diff --git a/docs/api/show_defaults.md b/docs/api/show_defaults.md
new file mode 100644
index 00000000..c081a6d9
--- /dev/null
+++ b/docs/api/show_defaults.md
@@ -0,0 +1 @@
+::: hssm.show_defaults
diff --git a/docs/api/simulate_data.md b/docs/api/simulate_data.md
new file mode 100644
index 00000000..107b70b7
--- /dev/null
+++ b/docs/api/simulate_data.md
@@ -0,0 +1 @@
+::: hssm.simulate_data
diff --git a/docs/getting_started/getting_started.ipynb b/docs/getting_started/getting_started.ipynb
index ba285ad9..2376b080 100644
--- a/docs/getting_started/getting_started.ipynb
+++ b/docs/getting_started/getting_started.ipynb
@@ -71,35 +71,6 @@
     "%config InlineBackend.figure_format='retina'"
    ]
   },
-  {
-   "cell_type": "markdown",
-   "id": "0d4cd6cd-1169-45e3-b35c-ef773bc5cad3",
-   "metadata": {},
-   "source": [
-    "### Setting the global float type\n",
-    "\n",
-    "**Note**: Using the analytical DDM (Drift Diffusion Model) likelihood in PyMC without setting the float type in `PyTensor` may result in warning messages during sampling, which is a known bug in PyMC v5.6.0 and earlier versions. To avoid these warnings, we provide a convenience function:"
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": 3,
-   "id": "1f29da18-bbaa-4d40-bce1-9ef3f42496ed",
-   "metadata": {},
-   "outputs": [
-    {
-     "name": "stdout",
-     "output_type": "stream",
-     "text": [
-      "Setting PyTensor floatX type to float32.\n",
-      "Setting \"jax_enable_x64\" to False. If this is not intended, please set `jax` to False.\n"
-     ]
-    }
-   ],
-   "source": [
-    "hssm.set_floatX(\"float32\")"
-   ]
-  },
   {
    "cell_type": "markdown",
    "id": "e8e41269-4c1e-4ec5-b2eb-82c35ee860c2",
   {
    "cell_type": "code",
-   "execution_count": 4,
+   "execution_count": 3,
    "id": "99bd95f4-5775-4142-8d6c-a01bfab7c88e",
    "metadata": {
     "tags": []
    },
[Notebook output diffs, HTML reprs garbled in extraction. Recoverable content: the cells
were re-run with PyMC 5.16.2 and ArviZ 0.19.0 (previously PyMC 5.9.0 / ArviZ 0.14.0;
`created_at` 2023-10-20 -> 2024-11-06; `modeling_interface_version` 0.12.0 -> 0.14.0;
`sampling_time` 13.11s -> 15.88s). The posterior `xarray.Dataset` reprs now report
per-variable sizes, store draws as float64 instead of float32, and index observations
with a shared `__obs__` dimension; a pointwise log-likelihood group for `rt,response`
appears alongside refreshed `sample_stats` and `observed_data` reprs.]

The updated summary table for the basic DDM model (previously v 0.567, z 0.482,
t 0.509, a 1.497):

|   | mean  | sd    | hdi_3% | hdi_97% | mcse_mean | mcse_sd | ess_bulk | ess_tail | r_hat |
| - | ----- | ----- | ------ | ------- | --------- | ------- | -------- | -------- | ----- |
| t | 0.522 | 0.020 | 0.483  | 0.560   | 0.000     | 0.0     | 2311.0   | 2524.0   | 1.0   |
| z | 0.494 | 0.013 | 0.468  | 0.517   | 0.000     | 0.0     | 1966.0   | 2394.0   | 1.0   |
| a | 1.490 | 0.028 | 1.437  | 1.540   | 0.001     | 0.0     | 2424.0   | 2857.0   | 1.0   |
| v | 0.556 | 0.033 | 0.497  | 0.620   | 0.001     | 0.0     | 2322.0   | 2703.0   | 1.0   |

The updated summary table for the model with the additional `theta` parameter
(previously t 0.483, z 0.500, theta 0.332, a 1.535, v 0.548):

|       | mean  | sd    | hdi_3% | hdi_97% | mcse_mean | mcse_sd | ess_bulk | ess_tail | r_hat |
| ----- | ----- | ----- | ------ | ------- | --------- | ------- | -------- | -------- | ----- |
| z     | 0.489 | 0.013 | 0.465  | 0.512   | 0.000     | 0.000   | 1900.0   | 2064.0   | 1.0   |
| theta | 0.309 | 0.030 | 0.248  | 0.362   | 0.001     | 0.001   | 1582.0   | 1373.0   | 1.0   |
| v     | 0.538 | 0.044 | 0.457  | 0.621   | 0.001     | 0.001   | 2017.0   | 2221.0   | 1.0   |
| a     | 1.480 | 0.062 | 1.358  | 1.592   | 0.002     | 0.001   | 1449.0   | 1396.0   | 1.0   |
| t     | 0.523 | 0.027 | 0.472  | 0.573   | 0.001     | 0.000   | 1583.0   | 1959.0   | 1.0   |
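
For reference, summary tables like the ones above are ArviZ posterior summaries of the
sampled traces. A minimal sketch of how to reproduce one, continuing the README example
(the `var_names` filter is optional, and the variable names assume the basic DDM model):

```python
import arviz as az

# `model` is the hssm.HSSM instance built in the README example
idata = model.sample()  # returns an arviz.InferenceData object

print(model.summary())  # HSSM's convenience wrapper

# Roughly equivalent, calling ArviZ directly on the returned traces
print(az.summary(idata, var_names=["v", "a", "z", "t"]))
```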