Commit
Merge pull request #95 from TimMonko/i2k-workshop-docs
I2k workshop docs - add downloads and workflow tutorial
TimMonko authored Oct 28, 2024
2 parents d3e65e5 + 92866a6 commit 01cbf07
Showing 25 changed files with 148 additions and 10 deletions.
1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
Expand Up @@ -82,3 +82,4 @@ venv/

# written by setuptools_scm
**/_version.py
/docs/tutorial/DownloadArchive
2 changes: 1 addition & 1 deletion README.md
Original file line number Diff line number Diff line change
Expand Up @@ -21,7 +21,7 @@ A collection of widgets intended to serve any person seeking to process microsco

### See the [poster presented at BINA 2024](https://timmonko.github.io/napari-ndev/BINA_poster/) for an overview of the plugins in action!

### Try out the [Virtual I2K 2024 Workshop](https://timmonko.github.io/napari-ndev/tutorial/00_setup/) for an interactive tutorial to learn more!
### Try out the [Virtual I2K 2024 Workshop](https://timmonko.github.io/napari-ndev/tutorial/00_setup/) for an interactive tutorial!

## Installation

Expand Down
40 changes: 37 additions & 3 deletions docs/tutorial/00_setup.md
Original file line number Diff line number Diff line change
Expand Up @@ -8,12 +8,14 @@ If you are familiar with python, then I would recommend creating a new environme

## Download Tutorial Images and Files

TBD: incoming download link
[[Download Link to be Inserted]]

### CellPainting Images
## CellPainting Images

The images come from the [Broad Bioimage Benchmark Collection](https://bbbc.broadinstitute.org/BBBC022/). Investigate the link for the description of the images.

Scale: 0.656um/pixel

Channels:

1. Hoechst 33342 (nuclei)
Expand All @@ -22,4 +24,36 @@ Channels:
4. WGA + phalloidin (plasma membrane, golgi, and actin)
5. MitoTracker Deep Red (mitochondria)

Scale: 0.656um/pixel
![cellpainting-image](screenshots/cellpainting-image.png)

## PrimaryNeuron Images

These images come from my own work at the University of Minnesota in the Thomas Bastian lab. The primary neurons are derived from embryonic mouse brains, and grown for a few days in a dish. The goal is to study morphology and iron homeostasis as the neurons develop over time in conditions of iron deficiency. The images available in the tutorial are extracted from multi-scene CZI files (each original file has over 100 scenes) using the `Image Utilities` widget. Metadata from the CZI files was correct, so the widget automatically passes this downstream without any user input.

Scale: 0.1241um/pixel

Channels:

1. AF647 - NCOA4 / nuclear coactivator 4 (a protein known to target ferritin for degradation)
2. AF568 - Ferritin (the iron storage protein)
3. AF488 - Phalloidin (stains actin filaments)
4. DAPI (nuclei)
5. Oblique (brightfield; not always present, which is ok)

![primary-neuron-image](screenshots/primaryneuron-image.png)

## NeuralProgenitor Images

These images come from the Zhe Chen lab at the University of Minnesota. They come from a microscope that saves images *very poorly*: the images are forced to be saved as RGB (despite each image having only one channel) and carry improper scaling metadata. The images available in this tutorial have already been concatenated and the metadata applied using the `Image Utilities` widget.

Pax6 - Green; Tbr2 - Magenta

Scale: 0.7548um/pixel

Channels:

1. PAX6 (a nuclear transcription factor identifying radial glia)
2. PAX6-2 (a duplicate of PAX6, due to the way the microscope saves images)
3. TBR2 (a nuclear transcription factor identifying intermediate progenitor cells)

![neuralprogenitor](screenshots/neuralprogenitor-image.png)
4 changes: 2 additions & 2 deletions docs/tutorial/01_example_pipeline.md
Original file line number Diff line number Diff line change
Expand Up @@ -2,7 +2,7 @@

The goal of this example pipeline is to get the user familiar with working with `napari-ndev` for batch processing and reproducibility (view `Image Utilities` and `Workflow Widget`). In addition, this example pipeline thoroughly explains the `Measure Widget`, since it is shared across many pipelines.

This Example Pipeline does not cover how `napari-ndev` is used for high-throughput annotations, the machine learning tools (`APOC Widget`), and designing your own workflows. This information will iinstead be covered in the interactive [Basic Usage Tutorial](02_basic_usage.md).
This Example Pipeline does not cover how `napari-ndev` is used for high-throughput annotations, the machine learning tools (`APOC Widget`), and designing your own workflows. This information will instead be covered in the interactive tutorials that follow.

## Image Utilities

Expand All @@ -29,7 +29,7 @@ Now, investigate your concatenated images. Go to `Select Files` and find the fol

## Example workflow

Once images are in a format that is helpful for analysis, we can proceed with other widgets. This does mean that some images do not need to be processed with the `Image Utilities` Widget; for example, some microscopes properly incorporate scale and channel names into the image metadata. For this tutorial, we are going to use the `Workflow Widget` to pre-process, segment, and label features of the image with a pre-made custom workflow file (see `cellpainting\scripting_workflow.ipynb` to see how). The intent of the `Workflow Widget` is to *easily* reproduce This custom workflow was designed initially with the `napari-assistant` which will be explored further in the [Basic Usage](02_basic_usage.md) tutorial section.
Once images are in a format that is helpful for analysis, we can proceed with other widgets. This does mean that some images do not need to be processed with the `Image Utilities` Widget; for example, some microscopes properly incorporate scale and channel names into the image metadata. For this tutorial, we are going to use the `Workflow Widget` to pre-process, segment, and label features of the image with a pre-made custom workflow file (see `cellpainting/scripting_workflow.ipynb` for how it was made). The intent of the `Workflow Widget` is to *easily* reproduce analyses. This custom workflow was designed initially with the `napari-assistant`, which will be explored further in the following tutorial sections.
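The idea behind a saved workflow — a named chain of processing steps whose inputs refer to earlier outputs — can be pictured with a minimal sketch. The step names, functions, and parameters below are illustrative stand-ins, not the contents of the actual workflow file:

```python
import numpy as np
from scipy import ndimage as ndi

# Each task maps an output name to (function, input name, kwargs).
# Tasks are declared in order, so each step can consume an earlier result.
workflow = {
    "denoised": (ndi.median_filter, "input", {"size": 3}),
    "background_removed": (ndi.white_tophat, "denoised", {"size": 10}),
}

def run_workflow(tasks, image):
    """Run every task in declaration order, threading results by name."""
    results = {"input": image}
    for name, (func, source, kwargs) in tasks.items():
        results[name] = func(results[source], **kwargs)
    return results

out = run_workflow(workflow, np.random.rand(64, 64))
```

Roughly speaking, a saved workflow file records the same information — function names and their parameters — so the chain can be replayed on each image in a batch.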

The goal for this workflow is to segment the nucleus, cell area (based on a voronoi tessellation of the nuclei), cytoplasm (cell area - nucleus), and the nucleoli. We will later measure the properties of these objects using the `Measure Widget`.

Expand Down
Empty file removed docs/tutorial/02_basic_usage.md
Empty file.
25 changes: 25 additions & 0 deletions docs/tutorial/02_easy_ML.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,25 @@
# Easy Machine Learning

The goal of this tutorial is to get a user familiar with generating annotations, workflows, and machine learning classifiers. Unlike the [Example Pipeline Tutorial](01_example_pipeline.md), this tutorial just provides raw images and hints on how to progress.

If you investigate the `primaryneurons` images you'll notice that there are variable, interesting morphologies that are not easy to segment by traditional intensity-based segmentation. Machine Learning fills this gap (you'll see!) in a way that Deep Learning has yet to sort out.

You might also be surprised when looking at some of the images that I would not recommend traditional intensity-based segmentation methods for NCOA4 and Ferritin (but would, and do, use them for DAPI). Instead, I would endorse using Machine Learning based segmentation because it is less sensitive to intensity (which is expected to differ between neurons and treatment groups) and more sensitive to 'Features' of the images, which include intensity, size, blobness, ridgeness, edgeness, etc.
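To make the 'Features' idea concrete, here is a toy pixel-classification sketch: each pixel gets a feature vector (raw intensity, smoothed intensity, edge strength), and sparse annotations train a classifier over those vectors. A nearest-centroid rule stands in for APOC's random forest purely for illustration; all names, shapes, and parameters here are assumptions.

```python
import numpy as np
from scipy import ndimage as ndi

def feature_stack(image):
    """Per-pixel features: raw intensity, smoothed intensity, edge strength."""
    return np.stack([
        image,
        ndi.gaussian_filter(image, sigma=2),
        ndi.gaussian_gradient_magnitude(image, sigma=2),
    ], axis=-1)

def train_centroids(features, annotations):
    """Mean feature vector per annotated class (label 0 = unannotated)."""
    return {c: features[annotations == c].mean(axis=0)
            for c in np.unique(annotations) if c != 0}

def predict(features, centroids):
    """Assign each pixel to the class with the nearest feature centroid."""
    classes = sorted(centroids)
    dists = np.stack([np.linalg.norm(features - centroids[c], axis=-1)
                      for c in classes], axis=-1)
    return np.array(classes)[np.argmin(dists, axis=-1)]

# Sparse annotations: one background scribble (1), one foreground scribble (2)
image = np.zeros((32, 32))
image[10:20, 10:20] = 1.0
ann = np.zeros((32, 32), dtype=int)
ann[2, 2] = 1
ann[14, 14] = 2

feats = feature_stack(image)
segmentation = predict(feats, train_centroids(feats, ann))
```

Because the classifier sees smoothed intensity and edge strength — not just raw intensity — it can separate structures that a single global threshold would mix together.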

The skills practiced in this tutorial will be used on relatively small 2D images; however, they are intended to transfer generally to both 3D and higher-dimensional datasets.

## Sparse annotation with Image Utilities

One strength of `napari-ndev` is the ability to quickly annotate images and save them, while maintaining helpful metadata to pair the images up for future processing. In `napari`, annotations can be made using `labels` or `shapes`. Shapes currently have the weakness that they cannot be saved as `images`, so `napari-ndev` converts shapes to `labels` so that they match the image format. For this tutorial, we want to use the `labels` feature to 'draw' on annotations.
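The shapes-to-labels conversion can be pictured with a small NumPy sketch. An axis-aligned rectangle with hypothetical coordinates is used for illustration only; napari handles arbitrary polygons:

```python
import numpy as np

def rectangle_to_labels(image_shape, top_left, bottom_right, label_value=1):
    """Rasterize an axis-aligned rectangle into a label image.

    A toy stand-in for the shapes-to-labels conversion: the output is an
    integer array matching the image shape, so it can be saved and
    reloaded just like any other label layer.
    """
    labels = np.zeros(image_shape, dtype=np.uint16)
    r0, c0 = top_left
    r1, c1 = bottom_right
    labels[r0:r1, c0:c1] = label_value
    return labels

# A 100x100 image with a rectangle label covering rows 10-40, columns 20-60
mask = rectangle_to_labels((100, 100), (10, 20), (40, 60))
```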

1. Load in one of the primary neuron images in `ExtractedScenes` using the `Image Utilities` widget.
2. Add a Labels layer by clicking the `tag` icon (the third button above the layer list)
3. Click the `Paintbrush` button in the `layer controls`.
4. Click and drag on the image to draw annotations.
5. Draw background labels with the original

## Generation of a Feature Set with APOC Widget

## Training a Machine Learning Classifier

## Predicting with a Machine Learning Classifier
Empty file removed docs/tutorial/03_advanced_usage.md
Empty file.
77 changes: 77 additions & 0 deletions docs/tutorial/03_build_pipeline.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,77 @@
# Build your own Workflow

The goal of this tutorial is to get a user familiar with generating ROI annotations and building your own workflows. Unlike the [Example Pipeline Tutorial](01_example_pipeline.md), this tutorial just provides raw images and hints on how to progress.

For this workflow, we will be using the `neuralprogenitors` images. Our goal is to segment the PAX6 and TBR2 channels. We also specifically want to make an ROI that is 200 microns wide on each image, and bin a specific region of the brain (the specifics are beyond the scope of this tutorial). Later, we will use these labels to count only the ones inside the region of interest.

The skills practiced in this tutorial will be used on relatively small 2D images; however, they are intended to transfer generally to both 3D and higher-dimensional datasets.

![neural-progenitor-goal](screenshots/neuralprogenitor-goal.png){ width=50% }

## Annotating regions of interest with Image Utilities

1. Load in one of the neural progenitor images from `ConcatenatedImages` using the `Image Utilities` widget.
2. Navigate in the toolbar to `View` -> `Scale Bar` -> `Scale Bar Visible`. Now there should be a scale bar in the bottom right.
3. Add a Shapes layer by clicking the `polygon` icon (the second button above the layer list)
4. Click the `Rectangle` button in the `layer controls`.
5. Click and drag on the image to draw a rectangle that has a 200um width.
6. Select button number 5 (highlighted in blue in the screenshot) to select the shape.
7. Move the shape by dragging it.
8. Rotate the shape into an area of interest.
9. Finally, with the `Shapes` layer highlighted, click the `Save Selected Layers` button in the `Image Utilities Widget`.

![Shape](screenshots/neuralprogenitor-shape.png)

## Using the napari-assistant to generate a workflow

1. Open the `napari-assistant` by navigating in the toolbar to `Plugins` -> `Assistant (napari-assistant)`
2. Select the image you want to process.
3. Play around with any of the assistant buttons that seem interesting! They are roughly ordered logically, left to right and top to bottom. The label layer I have in the screenshot is *not* quality segmentation; check the goal image above.
4. You can modify parameters and functions on the fly, including in previously used functions by clicking on that specific layer.
5. If you need help reaching the goal (of quality segmentation of the nuclei), try out some of the hints.
6. When you are satisfied with the workflow, click the `Save and load ...` button -> `Export workflow to file` and save the .yaml file produced.

![napari-assistant](screenshots/neuralprogenitor-assistant.png)

### Hints

??? tip "How to label"

You may find the functions in the `Label` button to be quite useful.

??? tip "A very useful label function"

Check out the [voronoi_otsu_labeling](https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/20_image_segmentation/11_voronoi_otsu_labeling.html) function. Read the link for more info.

??? tip "Pre-processing the images to reduce background"

    Try playing with functions in `remove noise` and `remove background` to remove some of the variability in background intensity and off-target fluorescence prior to labeling. This will make labeling more consistent.

??? tip "Cleaning up the labels"

Perhaps you have criteria for what labels you want to keep. Check out `Process Labels` button for cleaning up things like small or large labels, or labels on the edges.

??? tip "OK, I give up, just give me the answer"

Something like the following should work well.

1. median_sphere (pyclesperanto) with radii of 1
2. top_hat_sphere (pyclesperanto) with radii of 10 (roughly the diameter of the objects)
3. voronoi_otsu_label (pyclesperanto) with spot and outline sigmas of 1
4. exclude_small_labels (pyclesperanto) that are smaller than 10 pixels

## Applying your workflow in batch with the Workflow Widget

Consider the instructions for [Using the Workflow Widget for Batch Processing](01_example_pipeline.md#using-the-workflow-widget-for-batch-processing) and apply it to this workflow.

## Measuring your batch workflow output

In addition to what we have already learned about the `Measure Widget`, we can also consider a creative possibility. In this case, we want to count only cells in our region of interest (the rectangle shape that was drawn), so we load it in as a `Region Directory`. Then, we ensure that the `Shape` is added as an `Intensity Image` and that we measure the `intensity_max` or `intensity_min`. The maximum intensity of an object will be 1 *if it touches the region of interest at any point*. The minimum intensity of an object will be 1 only if it is *fully* inside the ROI, since then *all* of its pixels are inside the ROI. So, you can choose how you want to consider objects relative to the ROI.
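That reasoning can be verified with a small NumPy/SciPy sketch, treating the rasterized ROI as the 'intensity image' measured per label (the toy arrays are illustrative):

```python
import numpy as np
from scipy import ndimage as ndi

labels = np.zeros((8, 8), dtype=int)
labels[1:3, 1:3] = 1   # fully inside the ROI
labels[3:5, 4:7] = 2   # straddles the ROI edge
labels[6:8, 6:8] = 3   # fully outside

roi = np.zeros((8, 8))
roi[0:5, 0:5] = 1      # the drawn rectangle, rasterized to 0/1

ids = [1, 2, 3]
max_in_roi = ndi.maximum(roi, labels=labels, index=ids)
min_in_roi = ndi.minimum(roi, labels=labels, index=ids)
# max == 1 -> the object touches the ROI; min == 1 -> it is fully inside
```

Label 1 gets max 1 and min 1, label 2 gets max 1 but min 0, and label 3 gets 0 for both — the same distinctions `intensity_max` and `intensity_min` make in the Measure Widget output.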

When grouping the data, use the `intensity_max/min_Shape` as a grouping variable: all labels with a value of 1 or 0 will then be counted separately. This can be extended to multiple regions of interest, because each shape has its own value (not immediately obvious yet in napari). We have used this to label multiple brain regions consistently in whole brain section analyses.

**Future addition:** The ability to simply filter objects in the Measure Widget. This could, for example, be used to exclude all labels that are outside the region of interest (those with an intensity value of 0 relative to the ROI), instead of having to group.

## Notes on multi-dimensional data

Overall, most of the plugin should be able to handle datasets that have time, multi-channel, and 3D data. Try exploring the `Lund Timelapse (100MB)` sample data from `Pyclesperanto` in napari.
Binary file added docs/tutorial/cellpainting.zip
Binary file not shown.
Binary file added docs/tutorial/neuralprogenitors.zip
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file added docs/tutorial/primaryneurons.zip
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file added docs/tutorial/screenshots/cellpainting-image.png
Binary file added docs/tutorial/screenshots/primaryneuron-image.png
9 changes: 5 additions & 4 deletions mkdocs.yml
Original file line number Diff line number Diff line change
Expand Up @@ -49,10 +49,10 @@ nav:
- Further Widget Info: widget_further_info.md
- Poster: BINA_poster.md
- I2K Workshop:
- 00 - Tutorial Setup: tutorial/00_setup.md
- 01 - Example Pipeline: tutorial/01_example_pipeline.md
# - Basic Usage: tutorial/02_basic_usage.md
# - Advanced Usage: tutorial/03_advanced_usage.md
- 1) Tutorial Setup: tutorial/00_setup.md
- 2) Example Pipeline: tutorial/01_example_pipeline.md
- 3) Easy Machine Learning: tutorial/02_easy_ML.md
- 4) Build Your Own Pipeline: tutorial/03_build_pipeline.md
- Examples:
- Image Utilities: examples/utilities/image_utilities.ipynb
- Workflow:
Expand Down Expand Up @@ -85,6 +85,7 @@ markdown_extensions:
- pymdownx.details
- pymdownx.highlight
- pymdownx.extra
- pymdownx.superfences
- attr_list
- md_in_html
- pymdownx.tabbed:
Expand Down
