diff --git a/tutorials/03_density_estimators.ipynb b/tutorials/03_density_estimators.ipynb
index d1c575db1..a8b16a2d1 100644
--- a/tutorials/03_density_estimators.ipynb
+++ b/tutorials/03_density_estimators.ipynb
@@ -25,7 +25,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "One option is to use one of set of preconfigured density estimators by passing a string in the `density_estimator` keyword argument to the inference object (`NPE` or `NLE`), e.g., \"maf\" to use a Masked Autoregressive Flow, of \"nsf\" to use a Neural Spline Flow with default hyperparameters.\n"
+    "One option is using one of the preconfigured density estimators by passing a string in the `density_estimator` keyword argument to the inference object (`NPE` or `NLE`), e.g., \"maf\" for a Masked Autoregressive Flow, or \"nsf\" for a Neural Spline Flow with default hyperparameters.\n"
    ]
   },
   {
@@ -101,9 +101,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "It is also possible to pass an `embedding_net` to `posterior_nn()` which learn summary\n",
-    "statistics from high-dimensional simulation outputs. You can find a more detailed\n",
-    "tutorial on this in [04_embedding_networks](04_embedding_networks.md).\n"
+    "It is also possible to pass an `embedding_net` to `posterior_nn()` to automatically\n",
+    "learn summary statistics from high-dimensional simulation outputs. You can find a more\n",
+    "detailed tutorial on this in [04_embedding_networks](04_embedding_networks.md).\n"
    ]
   },
   {
diff --git a/tutorials/08_crafting_summary_statistics.ipynb b/tutorials/08_crafting_summary_statistics.ipynb
index 13806f4f6..b7775aaa8 100644
--- a/tutorials/08_crafting_summary_statistics.ipynb
+++ b/tutorials/08_crafting_summary_statistics.ipynb
@@ -12,7 +12,7 @@
    "metadata": {},
    "source": [
     "Many simulators produce outputs that are high-dimesional. For example, a simulator might\n",
-    "generate a time series or an image. In the tutorial on [04_embedding_networks](04_embedding_networks.md), we discussed how a\n",
+    "generate a time series or an image. In the tutorial [04_embedding_networks](04_embedding_networks.md), we discussed how\n",
     "neural networks can be used to learn summary statistics from such data. In this\n",
     "notebook, we will instead focus on hand-crafting summary statistics. We demonstrate that\n",
     "the choice of summary statistics can be crucial for the performance of the inference\n",
diff --git a/tutorials/12_iid_data_and_permutation_invariant_embeddings.ipynb b/tutorials/12_iid_data_and_permutation_invariant_embeddings.ipynb
index 3b3d8cb25..3b4eccaf6 100644
--- a/tutorials/12_iid_data_and_permutation_invariant_embeddings.ipynb
+++ b/tutorials/12_iid_data_and_permutation_invariant_embeddings.ipynb
@@ -325,7 +325,9 @@
    "source": [
     "## IID inference with NPE using permutation-invariant embedding nets\n",
     "\n",
-    "For NPE we need to define an embedding net that handles the set-like structure of iid-data, i.e., that it permutation invariant and can handle different number of trials.\n",
+    "For NPE we need to define an embedding net that handles the set-like structure of\n",
+    "iid-data, i.e., a permutation-invariant network that can handle different numbers and\n",
+    "orderings of trials.\n",
     "\n",
     "We implemented several embedding net classes that allow to construct such a permutation- and number-of-trials invariant embedding net.\n",
     "\n",
diff --git a/tutorials/15_importance_sampled_posteriors.ipynb b/tutorials/15_importance_sampled_posteriors.ipynb
index c4275ac6c..f18afde41 100644
--- a/tutorials/15_importance_sampled_posteriors.ipynb
+++ b/tutorials/15_importance_sampled_posteriors.ipynb
@@ -58,11 +58,9 @@
     "from torch import ones, eye\n",
     "import torch\n",
     "from torch.distributions import MultivariateNormal\n",
-    "import matplotlib.pyplot as plt\n",
     "\n",
     "from sbi.inference import NPE, ImportanceSamplingPosterior\n",
     "from sbi.utils import BoxUniform\n",
-    "from sbi.inference.potentials.base_potential import BasePotential\n",
     "from sbi.analysis import marginal_plot"
    ]
   },
@@ -71,7 +71,7 @@
    "id": "4808d6d3-cb14-4ecd-a0f5-824bf54a5b01",
    "metadata": {},
    "source": [
-    "We first define a simulator and a prior which both have functions for sampling (as required for SBI) and log_prob evaluations (as required for importance sampling)."
+    "We first define a simulator and a prior which both have a `sample` function (as required for `sbi`) and `log_prob` evaluations (as required for importance sampling)."
    ]
   },
   {
@@ -202,9 +200,16 @@
    "id": "7150cc13-9911-4656-936f-9cafb867eba4",
    "metadata": {},
    "source": [
-    "With the SBI toolbox, importance sampling is a one-liner. SBI supports two methods for importance sampling:\n",
-    "- `\"importance\"`: returns `n_samples` weighted samples (as above) corresponding to `n_samples * sample_efficiency` samples from the posterior. This results in unbiased samples, but the number of effective samples may be small when the SBI estimate is inaccurate.\n",
-    "- `\"sir\"` (sampling-importance-resampling): performs rejection sampling on a batched basis with batch size `oversampling_factor`. This is a guaranteed way to obtain `N / oversampling_factor` samples, but these may be biased as the weight normalization is not performed across the entire set of samples."
+    "With the `sbi` toolbox, importance sampling is a one-liner. `sbi` supports two methods\n",
+    "for importance sampling:\n",
+    "- `\"importance\"`: returns `n_samples` weighted samples (as above) corresponding to\n",
+    "  `n_samples * sample_efficiency` samples from the posterior. This results in unbiased\n",
+    "  samples, but the number of effective samples may be small when the `sbi` estimate is\n",
+    "  inaccurate.\n",
+    "- `\"sir\"` (sampling-importance-resampling): performs rejection sampling on a batched\n",
+    "  basis with batch size `oversampling_factor`. This is a guaranteed way to obtain `N /\n",
+    "  oversampling_factor` samples, but these may be biased as the weight normalization is\n",
+    "  not performed across the entire set of samples."
    ]
   },
   {
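The prose changed above mentions the `density_estimator` string shortcut and the `posterior_nn()`/`embedding_net` route without showing code. Below is an illustrative sketch of both options, assuming the current `sbi` API (import paths can differ slightly between versions); the prior and the toy `theta`/`x` data are placeholders, not part of the tutorials.

```python
import torch

from sbi.inference import NPE
from sbi.neural_nets import posterior_nn
from sbi.utils import BoxUniform

# Placeholder prior and toy "simulations" so the snippet runs end to end.
prior = BoxUniform(low=-2 * torch.ones(2), high=2 * torch.ones(2))
theta = prior.sample((1000,))
x = theta + 0.1 * torch.randn_like(theta)

# Option 1: choose a preconfigured density estimator by name, e.g. "maf" or "nsf".
inference = NPE(prior=prior, density_estimator="maf")
inference.append_simulations(theta, x).train()

# Option 2: configure the estimator explicitly via posterior_nn(), e.g. to attach
# an embedding_net (identity here; see 04_embedding_networks for learned summaries).
build_fn = posterior_nn(model="nsf", embedding_net=torch.nn.Identity())
inference = NPE(prior=prior, density_estimator=build_fn)
```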
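For the permutation-invariant embedding discussed in the `12_iid_data_and_permutation_invariant_embeddings` hunk, the core idea is a shared per-trial network followed by pooling over trials. The sketch below is plain PyTorch with illustrative class and layer names; `sbi` also ships dedicated embedding-net classes for this, as the cell notes.

```python
import torch
from torch import nn


class MeanPoolEmbedding(nn.Module):
    """Embed each trial with a shared network, then average over trials."""

    def __init__(self, trial_dim: int, latent_dim: int = 20):
        super().__init__()
        # Shared per-trial network keeps trials exchangeable.
        self.trial_net = nn.Sequential(
            nn.Linear(trial_dim, 50), nn.ReLU(), nn.Linear(50, latent_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_trials, trial_dim). Averaging over the trial dimension makes
        # the output invariant to trial order and defined for any number of trials.
        return self.trial_net(x).mean(dim=1)


embedding_net = MeanPoolEmbedding(trial_dim=2)
summary = embedding_net(torch.randn(10, 5, 2))  # shape (10, 20)
```

Such a module can then be passed as `embedding_net` to `posterior_nn()`, as in the previous snippet.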
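Finally, a rough sketch of the two importance-sampling modes described in the `15_importance_sampled_posteriors` hunk. It assumes objects defined earlier in that notebook: a trained NPE `posterior` (used as proposal), an observation `x_o`, and a `potential_fn(theta)` returning the unnormalized target log-density (true log-likelihood at `x_o` plus prior log-probability). Keyword names follow the prose above and may need small adjustments for the installed `sbi` version.

```python
from sbi.inference import ImportanceSamplingPosterior

# `posterior`, `x_o`, and `potential_fn` are assumed from the notebook (see above).

# method="importance": weighted samples; unbiased, but the effective sample size can
# be small when the NPE proposal is inaccurate.
is_posterior = ImportanceSamplingPosterior(
    potential_fn=potential_fn,
    proposal=posterior,
    method="importance",
)

# method="sir": resamples within batches of size `oversampling_factor`; a guaranteed
# number of samples, possibly biased since weights are normalized per batch only.
sir_posterior = ImportanceSamplingPosterior(
    potential_fn=potential_fn,
    proposal=posterior,
    method="sir",
    oversampling_factor=32,
)
theta_sir = sir_posterior.sample((1_000,), x=x_o)
```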