Update links in tutorials #885

Merged · 1 commit · Nov 2, 2023
4 changes: 2 additions & 2 deletions examples/00_HH_simulator.ipynb
@@ -13,7 +13,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Note, you find the original version of this notebook at [https://github.com/mackelab/sbi/blob/main/examples/00_HH_simulator.ipynb](https://github.com/mackelab/sbi/blob/main/examples/00_HH_simulator.ipynb) in the `sbi` repository."
"Note, you find the original version of this notebook at [https://github.com/sbi-dev/sbi/blob/main/examples/00_HH_simulator.ipynb](https://github.com/sbi-dev/sbi/blob/main/examples/00_HH_simulator.ipynb) in the `sbi` repository."
]
},
{
@@ -635,7 +635,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.1"
"version": "3.10.6"
}
},
"nbformat": 4,
2 changes: 1 addition & 1 deletion tutorials/00_getting_started.ipynb
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Note, you can find the original version of this notebook at [https://github.com/mackelab/sbi/blob/main/tutorials/00_getting_started.ipynb](https://github.com/mackelab/sbi/blob/main/tutorials/00_getting_started.ipynb) in the `sbi` repository."
"Note, you can find the original version of this notebook at [https://github.com/sbi-dev/sbi/blob/main/tutorials/00_getting_started.ipynb](https://github.com/sbi-dev/sbi/blob/main/tutorials/00_getting_started.ipynb) in the `sbi` repository."
]
},
{
2 changes: 1 addition & 1 deletion tutorials/01_gaussian_amortized.ipynb
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Note, you can find the original version of this notebook at [https://github.com/mackelab/sbi/blob/main/tutorials/01_gaussian_amortized.ipynb](https://github.com/mackelab/sbi/blob/main/tutorials/01_gaussian_amortized.ipynb) in the `sbi` repository."
"Note, you can find the original version of this notebook at [https://github.com/sbi-dev/sbi/blob/main/tutorials/01_gaussian_amortized.ipynb](https://github.com/sbi-dev/sbi/blob/main/tutorials/01_gaussian_amortized.ipynb) in the `sbi` repository."
]
},
{
2 changes: 1 addition & 1 deletion tutorials/02_flexible_interface.ipynb
@@ -15,7 +15,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Note, you can find the original version of this notebook at [https://github.com/mackelab/sbi/blob/main/tutorials/02_flexible_interface.ipynb](https://github.com/mackelab/sbi/blob/main/tutorials/02_flexible_interface.ipynb) in the `sbi` repository."
"Note, you can find the original version of this notebook at [https://github.com/sbi-dev/sbi/blob/main/tutorials/02_flexible_interface.ipynb](https://github.com/sbi-dev/sbi/blob/main/tutorials/02_flexible_interface.ipynb) in the `sbi` repository."
]
},
{
2 changes: 1 addition & 1 deletion tutorials/03_multiround_inference.ipynb
@@ -17,7 +17,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Note, you can find the original version of this notebook at [https://github.com/mackelab/sbi/blob/main/tutorials/03_multiround_inference.ipynb](https://github.com/mackelab/sbi/blob/main/tutorials/03_multiround_inference.ipynb) in the `sbi` repository."
"Note, you can find the original version of this notebook at [https://github.com/sbi-dev/sbi/blob/main/tutorials/03_multiround_inference.ipynb](https://github.com/sbi-dev/sbi/blob/main/tutorials/03_multiround_inference.ipynb) in the `sbi` repository."
]
},
{
2 changes: 1 addition & 1 deletion tutorials/04_density_estimators.ipynb
@@ -89,7 +89,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"It is also possible to pass an `embedding_net` to `posterior_nn()` which learn summary statistics from high-dimensional simulation outputs. You can find a more detailed tutorial on this [here](https://www.mackelab.org/sbi/tutorial/05_embedding_net/)."
"It is also possible to pass an `embedding_net` to `posterior_nn()` which learn summary statistics from high-dimensional simulation outputs. You can find a more detailed tutorial on this [here](https://sbi-dev.github.io/sbi/tutorial/05_embedding_net/)."
]
},
{
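The changed line above only links to the embedding-net tutorial; as a quick reference, a minimal sketch of passing an `embedding_net` to `posterior_nn()` might look roughly as follows. The import paths follow the `sbi` tutorials of that era, and the prior, dimensions, and architecture are placeholders, not part of this PR.

```python
import torch
import torch.nn as nn
from sbi.inference import SNPE
from sbi.utils import BoxUniform
from sbi.utils.get_nn_models import posterior_nn

# Placeholder prior over 3 parameters.
prior = BoxUniform(low=-torch.ones(3), high=torch.ones(3))

# Placeholder embedding net: compresses a 100-dimensional raw simulation
# output into 8 learned summary features.
embedding_net = nn.Sequential(
    nn.Linear(100, 50),
    nn.ReLU(),
    nn.Linear(50, 8),
)

# Build-function for the density estimator; the embedding net is applied
# to x before the normalizing flow sees it.
density_estimator_build_fun = posterior_nn(model="maf", embedding_net=embedding_net)

# Hand the build-function to SNPE instead of a plain string such as "maf".
inference = SNPE(prior=prior, density_estimator=density_estimator_build_fun)
```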
4 changes: 2 additions & 2 deletions tutorials/05_embedding_net.ipynb
@@ -6,7 +6,7 @@
"source": [
"# Learning summary statistics with a neural net\n",
"\n",
"When doing simulation-based inference, it is very important to use well-chosen summary statistics for describing the data generated by the simulator. Usually, these statistics take into account domain knowledge. For instance, in the [example of the Hodgkin-Huxley model](https://www.mackelab.org/sbi/examples/00_HH_simulator/), the summary statistics are defined by a [function](https://github.com/mackelab/sbi/blob/86d9b07238f5a0176638fecdd5622694d92f2962/examples/HH_helper_functions.py#L159) which takes a 120 ms recording as input (a 12000-dimensional input vector) and outputs a 7-dimensional feature vector containing different statistical descriptors of the recording (e.g., number of spikes, average value, etc.). \n",
"When doing simulation-based inference, it is very important to use well-chosen summary statistics for describing the data generated by the simulator. Usually, these statistics take into account domain knowledge. For instance, in the [example of the Hodgkin-Huxley model](https://sbi-dev.github.io/sbi/examples/00_HH_simulator/), the summary statistics are defined by a [function](https://github.com/sbi-dev/sbi/blob/86d9b07238f5a0176638fecdd5622694d92f2962/examples/HH_helper_functions.py#L159) which takes a 120 ms recording as input (a 12000-dimensional input vector) and outputs a 7-dimensional feature vector containing different statistical descriptors of the recording (e.g., number of spikes, average value, etc.). \n",
"\n",
"However, in other cases, it might be of interest to actually **learn from the data** which summary statistics to use, e.g., because the raw data is highly complex and domain knowledge is not available or not applicable. \n",
"\n",
@@ -21,7 +21,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Note, you can find the original version of this notebook at [https://github.com/mackelab/sbi/blob/main/tutorials/05_embedding_net.ipynb](https://github.com/mackelab/sbi/blob/main/tutorials/05_embedding_net.ipynb) in the `sbi` repository."
"Note, you can find the original version of this notebook at [https://github.com/sbi-dev/sbi/blob/main/tutorials/05_embedding_net.ipynb](https://github.com/sbi-dev/sbi/blob/main/tutorials/05_embedding_net.ipynb) in the `sbi` repository."
]
},
{
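The hunk above describes the hand-crafted Hodgkin-Huxley summary function only in prose (a 12000-dimensional recording reduced to 7 features). A toy illustration of that kind of reduction follows; the threshold and the chosen features are invented for this sketch and are not the actual helper in `examples/HH_helper_functions.py`.

```python
import numpy as np

def toy_summary_statistics(voltage_trace: np.ndarray) -> np.ndarray:
    """Reduce a raw voltage trace to a few hand-picked features.

    Toy stand-in for hand-crafted summary statistics: spike count,
    mean, standard deviation, and voltage range.
    """
    spike_threshold = 0.0  # arbitrary threshold for this sketch
    upward_crossings = (voltage_trace[:-1] < spike_threshold) & (
        voltage_trace[1:] >= spike_threshold
    )
    return np.array(
        [
            float(upward_crossings.sum()),  # number of threshold crossings ("spikes")
            voltage_trace.mean(),
            voltage_trace.std(),
            voltage_trace.max() - voltage_trace.min(),
        ]
    )
```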
2 changes: 1 addition & 1 deletion tutorials/07_conditional_distributions.ipynb
@@ -15,7 +15,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Note, you can find the original version of this notebook at [https://github.com/mackelab/sbi/blob/main/tutorials/07_conditional_distributions.ipynb](https://github.com/mackelab/sbi/blob/main/tutorials/07_conditional_distributions.ipynb) in the `sbi` repository."
"Note, you can find the original version of this notebook at [https://github.com/sbi-dev/sbi/blob/main/tutorials/07_conditional_distributions.ipynb](https://github.com/sbi-dev/sbi/blob/main/tutorials/07_conditional_distributions.ipynb) in the `sbi` repository."
]
},
{
2 changes: 1 addition & 1 deletion tutorials/10_crafting_summary_statistics.ipynb
@@ -11,7 +11,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Many simulators produce outputs that are high-dimesional. For example, a simulator might generate a time series or an image. In a [previous tutorial](https://www.mackelab.org/sbi/tutorial/05_embedding_net/), we discussed how a neural networks can be used to learn summary statistics from such data. In this notebook, we will instead focus on hand-crafting summary statistics. We demonstrate that the choice of summary statistics can be crucial for the performance of the inference algorithm.\n"
"Many simulators produce outputs that are high-dimesional. For example, a simulator might generate a time series or an image. In a [previous tutorial](https://sbi-dev.github.io/sbi/tutorial/05_embedding_net/), we discussed how a neural networks can be used to learn summary statistics from such data. In this notebook, we will instead focus on hand-crafting summary statistics. We demonstrate that the choice of summary statistics can be crucial for the performance of the inference algorithm.\n"
]
},
{
4 changes: 2 additions & 2 deletions tutorials/11_sampler_interface.ipynb
@@ -6,7 +6,7 @@
"source": [
"# The sampler interface\n",
"\n",
"Note: this tutorial requires that the user is already familiar with the [flexible interface](https://www.mackelab.org/sbi/tutorial/02_flexible_interface/).\n",
"Note: this tutorial requires that the user is already familiar with the [flexible interface](https://sbi-dev.github.io/sbi/tutorial/02_flexible_interface/).\n",
"\n",
"`sbi` implements three methods: SNPE, SNLE, and SNRE. When using SNPE, the trained neural network directly approximates the posterior. Thus, sampling from the posterior can be done by sampling from the trained neural network. The neural networks trained in SNLE and SNRE approximate the likelihood(-ratio). Thus, in order to draw samples from the posterior, one has to perform additional sampling steps, e.g. Markov-chain Monte-Carlo (MCMC). In `sbi`, the implemented samplers are:\n",
"\n",
@@ -16,7 +16,7 @@
"\n",
"- Variational inference (VI)\n",
"\n",
"When using the flexible interface, the sampler as well as its attributes can be set with `sample_with=\"mcmc\"`, `mcmc_method=\"slice_np\"`, and `mcmc_parameters={}`. However, for full flexibility in customizing the sampler, we recommend using the **sampler interface**. This interface is described here. Further details can be found [here](https://github.com/mackelab/sbi/pull/573)."
"When using the flexible interface, the sampler as well as its attributes can be set with `sample_with=\"mcmc\"`, `mcmc_method=\"slice_np\"`, and `mcmc_parameters={}`. However, for full flexibility in customizing the sampler, we recommend using the **sampler interface**. This interface is described here. Further details can be found [here](https://github.com/sbi-dev/sbi/pull/573)."
]
},
{
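For reference, the flexible-interface keywords named in the hunk above are typically combined roughly as in the sketch below, assuming the `sbi` API around the time of this PR. The prior, simulator, and all numbers are placeholders; the sampler interface itself is described in the linked tutorial and PR #573.

```python
import torch
from sbi.inference import SNLE
from sbi.utils import BoxUniform

prior = BoxUniform(low=-2 * torch.ones(2), high=2 * torch.ones(2))

def simulator(theta: torch.Tensor) -> torch.Tensor:
    # Placeholder simulator: observations are the parameters plus Gaussian noise.
    return theta + 0.1 * torch.randn_like(theta)

theta = prior.sample((500,))
x = simulator(theta)

# Train a neural likelihood estimator on the simulated pairs.
inference = SNLE(prior=prior)
likelihood_estimator = inference.append_simulations(theta, x).train()

# Flexible-interface route: the sampler and its settings are passed as keywords.
posterior = inference.build_posterior(
    sample_with="mcmc",
    mcmc_method="slice_np",
    mcmc_parameters={"num_chains": 2, "thin": 5},
)
samples = posterior.sample((100,), x=x[0])
```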
16 changes: 1 addition & 15 deletions tutorials/17_SBI_for_models_of_decision_making.ipynb
@@ -1,7 +1,6 @@
{
"cells": [
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -11,7 +10,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -22,13 +20,12 @@
"This can induce a problem when performing trial-based SBI that relies on learning a neural likelihood: It is challenging for most density estimators to handle both, continuous and discrete data at the same time. \n",
"However, there is a recent SBI method for solving this problem, it's called __Mixed Neural Likelihood Estimation__ (MNLE). It works just like NLE, but with mixed data types. The trick is that it learns two separate density estimators, one for the discrete part of the data, and one for the continuous part, and combines the two to obtain the final neural likelihood. Crucially, the continuous density estimator is trained conditioned on the output of the discrete one, such that statistical dependencies between the discrete and continuous data (e.g., between choices and reaction times) are modeled as well. The interested reader is referred to the original paper available [here](https://elifesciences.org/articles/77220).\n",
"\n",
"MNLE was recently added to `sbi` (see this [PR](https://github.com/mackelab/sbi/pull/638) and also [issue](https://github.com/mackelab/sbi/issues/845)) and follow the same API as `SNLE`.\n",
"MNLE was recently added to `sbi` (see this [PR](https://github.com/mackelab/sbi/pull/638) and also [issue](https://github.com/mackelab/sbi/issues/845)) and follows the same API as `SNLE`.\n",
"\n",
"In this tutorial we will show how to apply `MNLE` to mixed data, and how to deal with varying experimental conditions. "
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -175,7 +172,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -244,7 +240,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -317,7 +312,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -368,7 +362,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -378,7 +371,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -498,7 +490,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -510,7 +501,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -569,7 +559,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -638,7 +627,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -675,7 +663,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
@@ -771,7 +758,6 @@
]
},
{
"attachments": {},
"cell_type": "markdown",
"metadata": {},
"source": [
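Since the edited line above states that MNLE follows the same API as SNLE, a minimal usage sketch under that assumption is given below. The prior, the mixed continuous/discrete toy simulator, and all numbers are invented for illustration and are not part of this PR; it also assumes MNLE expects the discrete column(s) of the data last.

```python
import torch
from sbi.inference import MNLE
from sbi.utils import BoxUniform

prior = BoxUniform(low=torch.tensor([0.5, 0.5]), high=torch.tensor([2.0, 2.0]))

def simulator(theta: torch.Tensor) -> torch.Tensor:
    # Toy mixed-data simulator: a continuous "reaction time" and a binary "choice".
    rts = torch.distributions.Gamma(theta[:, 0], theta[:, 1]).sample().unsqueeze(-1)
    choices = torch.bernoulli(torch.sigmoid(theta[:, 0] - theta[:, 1])).unsqueeze(-1)
    # Discrete part goes in the last column.
    return torch.cat([rts, choices], dim=-1)

theta = prior.sample((1000,))
x = simulator(theta)

# Same workflow as SNLE: append simulations, train, build the posterior.
trainer = MNLE(prior=prior)
estimator = trainer.append_simulations(theta, x).train()
posterior = trainer.build_posterior(estimator)
samples = posterior.sample((100,), x=x[0])
```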