Merge pull request #95 from TimMonko/i2k-workshop-docs
I2k workshop docs - add downloads and workflow tutorial
Showing 25 changed files with 148 additions and 10 deletions.
@@ -82,3 +82,4 @@ venv/

# written by setuptools_scm
**/_version.py
+/docs/tutorial/DownloadArchive
Empty file.
@@ -0,0 +1,25 @@
# Easy Machine Learning

The goal of this tutorial is to get a user familiar with generating annotations, workflows, and machine learning classifiers. Unlike the [Example Pipeline Tutorial](01_example_pipeline.md), this tutorial provides only raw images and hints on how to progress.

If you investigate the `primaryneurons` images, you'll notice variable, interesting morphologies that are not easy to segment with traditional intensity-based segmentation. Machine Learning fills this gap (you'll see!) in a way that Deep Learning has yet to sort out.

You might also be surprised when looking at some of the images that I would not recommend traditional intensity-based segmentation methods for NCOA4 and Ferritin (but would, and do, use them for DAPI). Instead, I would endorse Machine Learning based segmentation because it is less sensitive to intensity (which is expected to differ between neurons and treatment groups) and more sensitive to 'Features' of the images, which include intensity, size, blobness, ridgeness, edgeness, etc.

The skills practiced in this tutorial will be used on relatively small, 2D images; however, the concepts are intended to transfer generally to 3D and higher-dimensional datasets.

## Sparse annotation with Image Utilities

One strength of `napari-ndev` is the ability to quickly annotate images and save them, while maintaining helpful metadata to pair the images up for future processing. In `napari`, annotations can be made using `labels` or `shapes`. A current weakness of shapes is that they cannot be saved as `images`, so `napari-ndev` converts shapes to `labels` so that they match the image format. For this tutorial, we want to use the `labels` feature to 'draw' annotations.

1. Load one of the primary neuron images in `ExtractedScenes` using the `Image Utilities` widget.
2. Add a `Labels` layer by clicking the `tag` icon (the third button above the layer list).
3. Click the `Paintbrush` button in the `layer controls`.
4. Click and drag on the image to draw annotations.
5. Draw background labels with the original label value, and use a different label value for structures of interest.
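
If you prefer to script the equivalent setup, a minimal sketch with the `napari` Python API might look like the following; the file path is a placeholder, not an actual file from the download:

```python
import numpy as np
import napari

viewer = napari.Viewer()

# Placeholder path: substitute one of the primary neuron images in ExtractedScenes.
image_layer = viewer.open(
    "docs/tutorial/primaryneurons/ExtractedScenes/example.ome.tiff"
)[0]

# Add an empty Labels layer with the same 2D shape as the image, then paint
# background (e.g. label 1) and structures of interest (e.g. label 2) in the GUI.
labels_layer = viewer.add_labels(
    np.zeros(image_layer.data.shape[-2:], dtype=np.uint32),
    name="annotations",
)

napari.run()
```

The painted layer can then be saved with the `Save Selected Layers` button in the `Image Utilities` widget.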

## Generation of a Feature Set with APOC Widget

## Training a Machine Learning Classifier

## Predicting with a Machine Learning Classifier
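
For orientation across these three steps (feature generation, training, prediction), a rough scripted sketch using the `apoc` library is below. The file names, the feature set choice, and the classifier file name are all assumptions, not part of the provided data:

```python
import apoc
from skimage.io import imread

# Placeholder paths: the image you annotated and the labels you saved.
image = imread("ExtractedScenes/example.ome.tiff")
annotation = imread("Labels/example.ome.tiff")

# Feature set: a predefined mix of intensity, blur, and edge features.
features = apoc.PredefinedFeatureSet.small_quick.value

# Train a pixel classifier from the sparse annotation
# (e.g. label 1 = background, label 2 = structures of interest).
classifier = apoc.PixelClassifier(opencl_filename="primaryneurons_pixel.cl")
classifier.train(features, annotation, image)

# Predict on the same image (or any other image with similar content).
prediction = classifier.predict(features=features, image=image)
```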
Empty file.
@@ -0,0 +1,77 @@
# Build your own Workflow

The goal of this tutorial is to get a user familiar with generating ROI annotations and building your own workflows. Unlike the [Example Pipeline Tutorial](01_example_pipeline.md), this tutorial provides only raw images and hints on how to progress.

For this workflow, we will be using the `neuralprogenitors` images. Our goal is to segment the PAX6 and TBR2 channels. We also specifically want to make an ROI that is 200 microns wide on each image, bounding a specific region of the brain (the specifics are beyond the scope of this tutorial). Later, we will count only the labels that fall inside this region of interest.

The skills practiced in this tutorial will be used on relatively small, 2D images; however, the concepts are intended to transfer generally to 3D and higher-dimensional datasets.

![neural-progenitor-goal](screenshots/neuralprogenitor-goal.png){ width=50% }

## Annotating regions of interest with Image Utilities

1. Load one of the neural progenitor images from `ConcatenatedImages` using the `Image Utilities` widget.
2. Navigate in the toolbar to `View` -> `Scale Bar` -> `Scale Bar Visible`. There should now be a scale bar in the bottom right corner.
3. Add a `Shapes` layer by clicking the `polygon` icon (the second button above the layer list).
4. Click the `Rectangle` button in the `layer controls`.
5. Click and drag on the image to draw a rectangle that is 200 µm wide.
6. Select button number 5 (highlighted in blue in the screenshot) to select the shape.
7. Move the shape by dragging it.
8. Rotate the shape into an area of interest.
9. Finally, with the `Shapes` layer highlighted, click the `Save Selected Layers` button in the `Image Utilities` widget.
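
For reference, a minimal scripted sketch of the same ROI annotation with the `napari` API is below; the file path, pixel size, and rectangle position are assumptions, so read the real pixel size from your image metadata:

```python
import numpy as np
import napari

viewer = napari.Viewer()

# Placeholder path: substitute one of the ConcatenatedImages files.
image_layer = viewer.open(
    "docs/tutorial/neuralprogenitors/ConcatenatedImages/example.tiff"
)[0]

# Assumed pixel size in microns; take the real value from the image metadata.
um_per_px = 0.5
width_px = 200 / um_per_px  # 200 micron wide ROI

# Rectangle corners in (row, column) pixel coordinates; position is arbitrary here.
y0, x0, height_px = 100, 100, 800
rectangle = np.array([
    [y0, x0],
    [y0, x0 + width_px],
    [y0 + height_px, x0 + width_px],
    [y0 + height_px, x0],
])

shapes_layer = viewer.add_shapes(rectangle, shape_type="rectangle", name="Shapes")

napari.run()
```

As in step 9, the layer can then be saved with `Save Selected Layers`.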

![Shape](screenshots/neuralprogenitor-shape.png)

## Using the napari-assistant to generate a workflow

1. Open the `napari-assistant` by navigating in the toolbar to `Plugins` -> `Assistant (napari-assistant)`.
2. Select the image you want to process.
3. Play around with the assistant buttons that seem interesting; they are roughly ordered left to right, top to bottom. Note that the label layer shown in the screenshot is *not* a quality segmentation; check the goal image above.
4. You can modify parameters and functions on the fly, including in previously used functions, by clicking on that specific layer.
5. If you need help reaching the goal (a quality segmentation of the nuclei), try some of the hints below.
6. When you are satisfied with the workflow, click the `Save and load ...` button -> `Export workflow to file` and save the `.yaml` file produced (a scripted equivalent using `napari-workflows` is sketched after the screenshot below).

![napari-assistant](screenshots/neuralprogenitor-assistant.png)
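
Under the hood, the assistant records each operation into a `napari-workflows` workflow, which is what the exported `.yaml` file describes. As a rough sketch of that mechanism (the specific steps, parameters, and file path here are illustrative assumptions, not the assistant's exact output):

```python
import pyclesperanto_prototype as cle
from napari_workflows import Workflow
from skimage.io import imread

workflow = Workflow()

# Define named steps; "input" is plugged in later.
workflow.set("denoised", cle.gaussian_blur, "input", sigma_x=1, sigma_y=1)
workflow.set("binarized", cle.threshold_otsu, "denoised")
workflow.set("labeled", cle.label, "binarized")

# Provide an image (placeholder path) and pull the final result.
workflow.set("input", imread("ConcatenatedImages/example.tiff"))
labels = workflow.get("labeled")
```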

### Hints

??? tip "How to label"

    You may find the functions in the `Label` button to be quite useful.

??? tip "A very useful label function"

    Check out the [voronoi_otsu_labeling](https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/20_image_segmentation/11_voronoi_otsu_labeling.html) function. Read the link for more info.

??? tip "Pre-processing the images to reduce background"

    Try playing with functions in `remove noise` and `remove background` to remove some of the variability in background intensity and off-target fluorescence prior to labeling. This will make labeling more consistent.

??? tip "Cleaning up the labels"

    Perhaps you have criteria for which labels you want to keep. Check out the `Process Labels` button for cleaning up things like small or large labels, or labels on the edges.

??? tip "OK, I give up, just give me the answer"

    Something like the following should work well.

    1. median_sphere (pyclesperanto) with radii of 1
    2. top_hat_sphere (pyclesperanto) with radii of 10 (roughly the diameter of the objects)
    3. voronoi_otsu_labeling (pyclesperanto) with spot and outline sigmas of 1
    4. exclude_small_labels (pyclesperanto) to exclude labels smaller than 10 pixels
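
    As a minimal scripted sketch of the same chain with `pyclesperanto_prototype` (the input path is a placeholder, and the exclusion parameter name is an assumption that may differ between versions):

    ```python
    import pyclesperanto_prototype as cle
    from skimage.io import imread

    # Placeholder path: the single channel you want to segment.
    image = imread("ConcatenatedImages/example.tiff")

    denoised = cle.median_sphere(image, radius_x=1, radius_y=1)
    background_subtracted = cle.top_hat_sphere(denoised, radius_x=10, radius_y=10)
    labels = cle.voronoi_otsu_labeling(
        background_subtracted, spot_sigma=1, outline_sigma=1
    )
    # Drop labels smaller than ~10 pixels (parameter name assumed; check your version).
    labels = cle.exclude_small_labels(labels, maximum_size=10)
    ```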

## Applying your workflow in batch with the Workflow Widget

Consider the instructions for [Using the Workflow Widget for Batch Processing](01_example_pipeline.md#using-the-workflow-widget-for-batch-processing) and apply them to this workflow.

## Measuring your batch workflow output

In addition to what we have already learned about the `Measure Widget`, we can also consider an additional creative possibility. In this case, we want to count only the cells in our region of interest (the rectangle shape that was drawn), so we want to load it in as a `Region Directory`. Then, we want to ensure that the `Shape` is added as an `Intensity Image` and that we measure the `intensity_max` or `intensity_min`. The maximum intensity of an object will be 1 *if it touches the region of interest at any point*. The minimum intensity of an object will be 1 only if it is *fully* inside the ROI, since then *all* of its pixels are inside the ROI. So, you can choose how you want to consider objects relative to the ROI.

Then, when grouping the data, use the `intensity_max/min_Shape` as a grouping variable! All labels with a value of 1 or 0 will then be counted separately. This can be extended to multiple regions of interest, because each shape has its own value (not immediately obvious yet in napari). We have used this to label multiple brain regions consistently in whole-brain section analyses.
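
For example, with the measurement table loaded into pandas, the grouping (or filtering) might look like the sketch below; the file name and the `image_name` / `intensity_min_Shape` column names are assumptions, so use whatever names appear in your own output:

```python
import pandas as pd

# Placeholder file: the properties table exported by the Measure Widget.
df = pd.read_csv("measure_props.csv")

# Count labels per image, split by whether they sit fully inside the ROI
# (assumed column: intensity_min_Shape, 1 = fully inside, 0 = not).
counts = (
    df.groupby(["image_name", "intensity_min_Shape"])
      .size()
      .rename("label_count")
      .reset_index()
)

# Or drop everything outside the ROI before further analysis.
inside_roi = df[df["intensity_min_Shape"] == 1]
print(counts)
```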

**Future addition:** The ability to simply filter objects in the Measure Widget. This could, for example, be used to exclude all labels that are outside the region of interest (having an intensity value of 0 relative to the ROI), instead of having to group.

## Notes on multi-dimensional data

Overall, most of the plugin should be able to handle datasets that have time, multi-channel, and 3D data. Try exploring the `Lund Timelapse (100MB)` sample data from `Pyclesperanto` in napari.
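
Many of the underlying operations are dimension-agnostic, so the same calls shown above also work on 3D (or timepoint-by-timepoint) data. A tiny sketch with a synthetic 3D stack standing in for real data:

```python
import numpy as np
import pyclesperanto_prototype as cle

# A random 3D stack standing in for one timepoint/channel of a real dataset.
stack = np.random.random((60, 256, 256)).astype(np.float32)

labels_3d = cle.voronoi_otsu_labeling(stack, spot_sigma=3, outline_sigma=1)
print(labels_3d.shape)  # (60, 256, 256)
```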
Binary file added (+12.7 MB): .../tutorial/neuralprogenitors/ConcatenatedImages/E18.5 EdU16.5 9bp Homo Em3 S1_01 PAX6.tiff
Binary file added (+12.5 MB): .../tutorial/neuralprogenitors/ConcatenatedImages/E18.5 EdU16.5 9bp Homo Em3 V1_01 PAX6.tiff
Binary file added (+13.2 MB): docs/tutorial/neuralprogenitors/ConcatenatedImages/E18.5 EdU16.5 9bp WT Em8 S1_02 PAX6.tiff
Binary file added (+12.6 MB): docs/tutorial/neuralprogenitors/ConcatenatedImages/E18.5 EdU16.5 9bp WT Em8 V1_02 PAX6.tiff
Binary file added (+10.3 KB): docs/tutorial/neuralprogenitors/Shapes/E18.5 EdU16.5 9bp Homo Em3 S1_01 PAX6.tiff
Binary file added (+30 MB): ...actedScenes/2024-08-07 25x 48HIC NCOA4 647 FT 568 PHALL 488 DAPI OBL_scene_P5-D9.ome.tiff
Binary file added (+36.4 MB): ...actedScenes/2024-08-07 25x 96HIC NCOA4 647 FT 568 PHALL 488 DAPI OBL_scene_P7-D9.ome.tiff
Several other binary files are not shown.