---
format:
  revealjs:
    theme: default
    logo: "images/JAX_logo_rgb_transparentback.png"
    slide-number: true
    footer: Imaging Applications, Research IT
    transition: "slide"
    highlight-style: arrow
    chalkboard:
      buttons: false
    controls-layout: bottom-right
    controls: true
from: markdown+emoji
---
## Introduction to QuPath
**Peter Sobolewski (he/him)**
Systems Analyst, Imaging Applications
Research IT
:::: {.columns style='display: flex'}
::: {.column width="50%" style='display: flex; justify-content: left; align-items: center;'}
### [https://qupath.github.io](https://qupath.github.io)
:::
::: {.column width="50%" style='display: flex; justify-content: center;'}
![](images/qupath_128.png){fig-alt="QuPath logo" width=250}
:::
::::
## Lesson Plan
- Brief introduction to QuPath
- Key concept: Projects
- QuPath GUI layout and toolbars
- Making & Managing Annotations
- Setting stain Vectors
- Pixel classification using a thresholder
## What is QuPath?
QuPath is an open-source image analysis program
<br/>
**Key feature**:
QuPath provides a graphical user interface (`GUI`) designed for high-performance work with very large 2D images, like those produced by slide scanners
## A bit of background
- the project was started by Pete Bankhead at Queen's University Belfast
- the first public release was in October 2016
- currently developed by [Pete Bankhead's group at The University of Edinburgh](https://www.ed.ac.uk/pathology/people/staff-students/peter-bankhead)
- \>7500 publications [using QuPath](https://scholar.google.com/scholar?start=0&q=qupath)
:::{.callout-important}
### If you use it, cite it!
[https://qupath.readthedocs.io/en/stable/docs/intro/citing.html](https://qupath.readthedocs.io/en/stable/docs/intro/citing.html)
:::
## Strengths of QuPath
- Designed for very large, multiscale 2D files
- Extensive annotation, overlay, and visualization options
- Robust algorithms for common analysis tasks
- *Interactive* machine learning for pixel and object classification
- Recordable workflows allow for easy batch processing
- Integrated ImageJ
- Robust scripting support (Groovy)
## Limitations
- Limited *3D & time series viewing* capabilities:
  - single planes only
- Some complex functions require scripting
- Extension ecosystem is relatively limited
- No support for zarr/NGFF
## Getting help
- In-app documentation: Help menu
- Online documentation: [https://qupath.readthedocs.io/en/stable/index.html](https://qupath.readthedocs.io/en/stable/index.html)
  - Includes tutorials, scripting, advanced concepts
- YouTube Channel: [https://www.youtube.com/c/QuPath](https://www.youtube.com/c/QuPath)
- Image.sc Forums: [https://forum.image.sc/tag/qupath](https://forum.image.sc/tag/qupath)
# Key concept: QuPath `project`
- `projects` are the best way to organize your analysis work
- `projects` are required by some scripts/extensions
- `projects` are folders that:
  - group together related images
  - organize data, scripts, classifiers, etc.
## How to create a project?
- `Create project` button
- File ‣ Project… ‣ Create project
:::{.callout-important}
### When making a project, remember to create a folder for it!
:::
- Drag-and-drop an **empty** folder
## Anatomy of a QuPath `project`
![](images/project_folder.png){fig-alt="Screenshot showing the contents of a QuPath project folder." fig-align="center"}
## How to add images to a project?
- Drag and drop
- File ‣ Project… ‣ Add images
:::{.callout-important}
#### Image files will not be copied into your `project` folder!
- A link (`URI`) will be used!
- If you move the file(s) you will be prompted to update the link.
:::
:::aside
QuPath can handle large 2D images, e.g. whole-slide images—*there is no need to make tiles first*
:::
## QuPath will never edit your image files!
- `images` in a `project` are actually QuPath objects holding metadata, annotations, etc.
- The image data/pixels are accessed from the files via an `ImageServer`
- Duplicating/saving `images` within a `project` only relates to QuPath-specific data, **not** the original files or pixels
:::{.callout-important}
### When images are removed from a project, the data files are not deleted!
:::
# Let's make our first QuPath project!
## Launch QuPath
![](images/qupath_welcome.png){fig-alt="Screenshot showing the QuPath welcome screen." fig-align="center"}
## Click `Create project`
![](images/create_project_1.png){fig-alt="Inset of screenshot showing the `create project` button in red." fig-align="center"}
## Make sure to make a new, empty folder
![](images/create_project_2.png){fig-alt="Screenshot showing the file browser with the `New Folder` button in red." fig-align="center"}
## Select the empty folder for the project
![](images/create_project_3.png){fig-alt="Screenshot showing the file browser with empty project folder." fig-align="center"}
## Add an image
![](images/add_image_1.png){fig-alt="Inset of screenshot showing the `Add images` button in red." fig-align="center"}
### Use drag-and-drop!
## Use `Choose files`—or drag-and-drop
![](images/add_image_2.png){fig-alt="Screenshot of `Import images to project` dialog with `Choose files` button in red." fig-align="center"}
## Click `Import`
![](images/add_image_3.png){fig-alt="Screenshot of `Import images to project` dialog." fig-align="center"}
### Defaults are OK for the majority of cases
## Set Image type
![](images/set_image_type.png){fig-alt="Screenshot of `Set image type` dialog." fig-align="center"}
:::{.callout-note}
### This affects the behavior of tools & features! (but you can change it)
:::
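As an aside for later (once we get to scripting): the image type choice is also recorded as a command in the Workflow tab. A minimal Groovy sketch of what that recorded line looks like for a brightfield H&E image (the exact type string for your image may differ):

```groovy
// Set the image type for the current image (normally recorded automatically
// in the Workflow tab when you choose a type at import)
setImageType('BRIGHTFIELD_H_E')
```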
## :tada: QuPath project with an image :tada:
![](images/QuPath_main_window.png){fig-alt="Screenshot of QuPath application window with a single image in the `Analysis pane`." fig-align="center"}
# QuPath GUI layout
## Analysis Pane: Project tab
![](images/analysis_pane.png){fig-alt="Inset of screenshot of QuPath application window with a single image in the `Analysis pane` Project tab." fig-align="center"}
### Double-click an image to open it in the `viewer` on the right -- it will be bolded in the list
## Analysis Pane: Project tab
### Right-click on images to:
- Rename, remove, or duplicate
  - When duplicating, you can choose to also copy the QuPath object data files
- Add metadata or descriptions
- Blind/mask samples: replace names with hashes
:::{.callout-important}
### These changes only affect the QuPath objects in this project, not your data files!
:::
## Analysis Pane: Image tab
![](images/analysis_pane_image.png){fig-alt="Inset of screenshot of QuPath application window `Analysis pane` with CMU-1.svs image open." fig-align="center"}
## Analysis Pane: Image tab
- Name and location of the image file
- Pixel width/height: scale calibration, **very important**
  - you can set this yourself using a `Line` annotation
- Server type: how QuPath reads the raw image file, set at import
- Pyramid: levels of downsampling for performance
- Image type: determines some default functions, e.g. channel information
## QuPath viewer
![](images/qupath_viewer.png){fig-alt="Screenshot of QuPath application window with elements of the viewer highlighted in red." fig-align="center"}
- Minimap: image overview, allows for quick navigation
- Info bar: shows the pixel coordinates and value
- Right-click the viewer to close it or make a grid of viewers
# QuPath toolbar
## QuPath toolbar
![](images/QuPath_toolbar.png){fig-alt="Screenshot of QuPath application window with the toolbar marked in red." fig-align="center"}
## QuPath toolbar: information
![](images/QuPath_info_tools.png){fig-alt="Screenshot of QuPath application window with the Preferences, Log, and Interactive Help tools marked in red." fig-align="center"}
## QuPath toolbar: information
![](images/toolbar_info_tools_inset.png){fig-alt="Inset of screenshot of QuPath application window with the Preferences, Log, and Interactive Help tools marked in red." fig-align="center"}
- Preferences: settings that customize QuPath behavior, including look-and-feel, extensions, etc.
- Log: message logger showing Info, Warnings, and Errors—helpful when troubleshooting or asking for help
  - `Dot` notifies of unread Errors
- ***Interactive Help***: provides contextual help based on the state of the application/viewer, what you mouse over, etc.
  - `Dot` notifies of potential problems and will offer solutions
## QuPath toolbar: Annotation tools
![](images/QuPath_annotation_tools_1.png){fig-alt="Inset of screenshot of QuPath application window (top left) showing annotation tools in red." fig-align="center"}
- **+** : Move tool: pan around the image or move objects
- For the Polyline tool: double-click to end
- Brush: the size scales with zoom level
- Magic wand: context-aware brush that takes into account intensity & zoom
- `S`: Switch to selection mode, where tools *select* objects instead
## QuPath toolbar: brightness & contrast, zoom
![](images/toolbar_BC_zoom.png){fig-alt="Inset of screenshot of QuPath application window (top left) showing B&C and Zoom tools in red." fig-align="center"}
- Brightness & contrast dialog also allows for control of channels and colormaps
- Can manually specify a zoom level by double-clicking
- The Zoom-to-fit button resets the zoom so the whole image fits in the viewer
## QuPath toolbar: show/hide & fill, opacity
![](images/toolbar_show_hide.png){fig-alt="Inset of screenshot of QuPath application window (top left) showing Annotation and Detection Show/Hide tools in red." fig-align="center"}
- Enable showing Annotations or Detections, plus whether they are filled
- `C`: shows/hides the pixel classification overlay
- Slider controls opacity, so you can see through to the data beneath
:::{.callout-tip}
### If you can't see things you expect, check that they aren't hidden!
:::
## QuPath toolbar: measurements table & scripting
![](images/toolbar_meas_script.png){fig-alt="Inset of screenshot of QuPath application window (top left) showing the Measurement Table and Script Editor tools in red." fig-align="center"}
- The Measurement Table drop-down menu opens a table view of the measurements associated with different object types
- The Script Editor is used for automation and advanced tasks
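To give a flavor of what the Script Editor can do, here is a minimal Groovy sketch that only *reads* existing objects, so it is safe to try on any image (the names and output format are just for illustration):

```groovy
// Print a quick summary of the objects in the current image
def annotations = getAnnotationObjects()
def detections = getDetectionObjects()
println "Annotations: ${annotations.size()}, Detections: ${detections.size()}"

// List each annotation with its class and ROI area (in pixel units)
for (annotation in annotations) {
    println "${annotation.getName()} | ${annotation.getPathClass()} | ${annotation.getROI().getArea()}"
}
```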
# QuPath GUI Pro-tip: Command list
:::{.callout-tip}
### For menu items, press `Command/Control-L` and start typing what you remember to find the menu item/command!
:::
![](images/command_list.png){fig-alt="Screenshot of QuPath `Command list` dialog, with the filter box marked in red." fig-align="center"}
# Making & Managing Annotations
## Annotation tool keybindings
- Tools are also accessible in the `Tools` menu, which shows the shortcuts
![](images/QuPath_tools_menu.png){fig-alt="Inset of screenshot of QuPath window (top left) showing the Tools menu." fig-align="center"}
## Annotation tool preferences
![](images/Pref_drawing_tools.png){fig-alt="Inset of screenshot of QuPath Preferences dialog showing the settings for the drawing tools." fig-align="center"}
## Annotation tool tips and tricks
- Annotations are editable, unless you lock them (Right-click ‣ Lock)
- Move tool can be toggled with `Space bar`
- Double-click to *select* objects
- For Ellipse & Rectangle, you can also use Objects ‣ Annotations… ‣ Specify annotation
- To start fresh: Objects ‣ Delete… ‣ Delete all annotations
- Brush and Wand scale with zoom; they become more precise as you zoom in
## Annotation tool tips and tricks
- You can use the Brush or Wand tool to refine selected Annotations
  - start from the *inside* of the annotation to expand it
    - Note: if you want to make an Annotation *inside* another Annotation, you need to lock the outer one first
  - hold `Alt/Option` and start from the *outside* to reduce it
- Use `Shift` with the Brush or Wand to add to an Annotation (discontinuous)
## Annotations exercise 1
<br/>
### Take a few minutes to try out the different tools
## Analysis Pane: Annotation tab
![](images/manage_annotations.png){fig-alt="Screenshot of QuPath application showing the Annotations tab of the Analysis Pane in red." fig-align="center"}
- The Annotation list lets you select or delete Annotations
  - Right-click to **lock** or edit properties (name, color)
  - You can shift- or control-click to multi-select
- The Classification list lets you assign Annotations to `classes`
## Analysis Pane: Annotation tab: Classes
![](images/annotations_classes_submenu.png){fig-alt="Screenshot of QuPath application showing the Annotations tab of the Analysis Pane with the Classes submenu shown in red." fig-align="center"}
- Classes group objects in a more powerful way than names
- You can add your own classes or remove existing ones
- Note: `Populate` commands refer to the **class list**
:::{.callout-important}
### Majority of QuPath functions rely on using Classes
:::
## Analysis Pane: Annotations tab: Set and autoset
![](images/annotations_classes_set.png){fig-alt="Screenshot of QuPath application showing the Annotations tab of the Analysis Pane with the `Set selected` and `Auto set` buttons shown in red." fig-align="center"}
- `Set selected` assigns all selected Annotations to the selected class
- `Auto set` assigns all new Annotations (regardless of type) to the selected class
:::{.callout-important}
### Classes with a `*` are ignored in measurements, percentages, etc.
:::
## Annotations exercise 2
1. Add a `Tissue` class
2. Make Annotations for each of the tissue slices
3. Assign the 4 Annotations to the `Tissue` class
4. Name the 4 Annotations `slice 1` through `slice 4` (left to right)
:::{.callout-tip}
### First delete all annotations (Objects ‣ Delete…) or duplicate the image (uncheck `Also duplicate data`)
:::
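If you prefer to script the cleanup step from the tip above, a one-line Groovy sketch does it (note that this removes *all* objects, detections included, so only use it on a scratch copy):

```groovy
// Remove every object (annotations and detections) from the current image
clearAllObjects()
```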
## Analysis Pane: Annotation measurements
![](images/Annotation_measurements_1.png){fig-alt="Inset of screenshot of QuPath application showing the Annotations measurements pane of the Annotations tab of the Analysis Pane." fig-align="center"}
- Annotations have basic Measurements that depend on their type (length, area, perimeter, etc.)
## Annotation measurements: measurement tables
![](images/toolbar_meas_script.png){fig-alt="Inset of screenshot of QuPath application window (top left) showing the Measurement Table and Script Editor tools in red." fig-align="center"}
- You can generate a table using the `Show measurements table` Toolbar button
## Analysis Pane: Hierarchy tab
- Presents the relationship between objects:
  - Parent ‣ Child
:::{.callout-important}
### For Annotations, the entire area of a child must be inside the parent
:::
- Objects ‣ Annotations… ‣ Resolve hierarchy
  - QuPath will place **all** child objects with their parent objects
- Objects ‣ Annotations… ‣ Insert into hierarchy
  - QuPath will insert **the selected** objects into the hierarchy
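For reference, resolving the hierarchy is also available as a scripting call, which is handy when batch processing. A minimal Groovy sketch, assuming the standard QuPath scripting helpers:

```groovy
// Rebuild parent/child relationships so each object is nested inside
// the annotation that fully contains it (same as Resolve hierarchy)
resolveHierarchy()
```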
## Annotations exercise 3
1. Make a `Border` class
2. Make a few `Line` Annotations per tissue slice to measure the thickness of the dark purple edge region
3. Assign the lines to the `Border` class
4. Use `Resolve hierarchy` to assign the lines to slices
5. Browse the `Measurements table`: right-click to show just the `Border` class
:::{.callout-tip}
### First `Lock` all of your `Tissue` Annotations to prevent accidental editing
:::
# Setting stain vectors
## Stain separation/deconvolution
- Aims to extract information about individual chromogenic stains from the RGB values of each pixel
- e.g. convert (R, G, B) to (H, E, residual)
:::{.callout-note}
### For fluorescence images, single channels are typically collected, so this isn't needed.
:::
- When you select the `image type` on import, a default stain separation is performed
## Start with a clean copy
- Let's start by duplicating our image:
  - Analysis Pane ‣ Project tab
  - Right-click the image and select `Duplicate`
  - **Un-check** `Also duplicate data files`
![](images/duplicate_image_no_data.png){fig-alt="Screenshot of QuPath `Duplicate image` dialog, with `Also duplicate data files` in red." fig-align="center"}
## Setting stain vectors
- Use Annotation tools to make a small Annotation that includes:
- some background
- some pixels of each stain
- Analyze ‣ Estimate stain vectors
![](images/Analyze_stain_vectors.png){fig-alt="Screenshot of QuPath Analyze menu with `Estimate stain vectors` highlighted in blue." fig-align="center"}
## Estimate stain vectors
![](images/Est_stain_vec_modal.png){fig-alt="Screenshot of QuPath `Estimate stain vectors` dialog." fig-align="center"}
- QuPath has detected that the most common pixel value (the mode) is different from the current background
- Click `Yes` to set the new value
## Visual stain editor
![](images/visual_stain_editor.png){fig-alt="Screenshot of QuPath `Visual stain editor` dialog with red arrows indicating the handles of the stain vectors, as well as the Auto button and the `exclude unrecognized stains` checkbox." fig-align="center"}
## Visual stain editor
- At the bottom are settings for `Auto` vector estimation
- Click `Auto` to get an initial estimate
- Look at the color panels, which are views of the 3D color space from orthogonal sides
  - if there are other (unrecognized) pixels, you can try to exclude them (checkbox) and run Auto again
  - you want the vectors to encompass the reddish-pink-purple pixels (those of H&E)
- You can manually tweak the vectors using the handles
## Checking the results: Brightness & Contrast
![](images/brightness_contrast.png){fig-alt="Screenshot of QuPath `Brightness & Contrast` dialog used to check the separated stain channels." fig-align="center"}
- You want visual separation between the stains and a low residual
## Stain vectors exercise
1. Make a duplicate image, uncheck `Also duplicate data`, and name it `vectors` (or similar)
2. Draw a small Annotation that includes both stains and background, ~30-50% background
3. Use `Analyze ‣ Estimate stain vectors` and the `Visual stain editor` to adjust the stain deconvolution
4. Use `Brightness & Contrast` to check the H&E color channels in different areas
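Once you are happy with the deconvolution, the Workflow tab records it as a `setColorDeconvolutionStains` command that you can copy into a script and apply to other images. A hedged Groovy sketch of what that looks like (the stain names and numeric values below are placeholders; copy the exact string from your own Workflow tab):

```groovy
// Apply estimated stain vectors to the current image
// (all numeric values below are placeholders, not real estimates)
setColorDeconvolutionStains(
    '{"Name" : "H&E estimated", ' +
    '"Stain 1" : "Hematoxylin", "Values 1" : "0.65 0.70 0.29", ' +
    '"Stain 2" : "Eosin", "Values 2" : "0.21 0.95 0.23", ' +
    '"Background" : "255 255 255"}'
)
```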
# Automated Annotations via Thresholding
## Thresholding in QuPath
- Thresholding is a type of `Pixel Classification`:
  - Is a given pixel above or below a certain numeric value?
  - Assign it to a Class accordingly
- Most commonly used to create Annotations, e.g. Tissue vs. background
- Menu: Classify ‣ Pixel classification ‣ Create thresholder
![](images/classify_thresholder.png){fig-alt="Screenshot of QuPath menus: Classify > Pixel classification, with `Create thresholder` highlighted in blue." fig-align="center"}
## Pixel classification ‣ Create thresholder
![](images/Classify_create_thresholder.png){fig-alt="Screenshot of QuPath `Create thresholder` dialog highlighted in blue." fig-align="center"}
## Thresholder parameters
- Resolution: lower resolution is less computationally intensive, but loses detail
- Channel: channel(s) to compare against the threshold
  - for RGB images or tissue detection, `Average channels` can be a good choice
- Prefilter: preprocessing to apply prior to thresholding
  - Gaussian is typical, to smooth out noise
- Smoothing sigma: radius of the prefilter, in pixels
## Thresholder parameters
- **Threshold**: the numeric cutoff value that will be used
:::{.callout-note}
- For RGB, white is (255, 255, 255) and black is (0, 0, 0)
- For intensity channels (e.g. stains), 0 is background and higher values represent signal
:::
- Above threshold: the class that pixels above the value will be assigned to
- Below threshold: the class that pixels below the value will be assigned to
:::{.callout-tip}
#### Double-click on `Above` or `Below` to swap classes!
:::
## Thresholder post-processing
:::{.callout-important}
### You need to save your thresholder before you can use it!
:::
- Measure: requires existing Annotations; adds class areas and percentages as measurements
- **Create objects**: creates objects and assigns them to the classes
- Classify: assigns existing Detections to the classes
## Thresholder: Create objects
![](images/Pixel_classifier_choose_parents.png){fig-alt="Screenshot of QuPath `Pixel classifier` dialog showing `Full image`." fig-align="center"}
- Choose the parent of the new objects; for Tissue, you generally want it to be the full image
## Thresholder: Create objects
![](images/Create_objects.png){fig-alt="Screenshot of QuPath `Create objects` dialog." fig-align="center"}
We are segmenting 4 large tissue slices, so we want to make:
- Annotations that are large, without holes
- Separate objects (of the same class) for each slice
## Thresholder: Load classifier to reuse
We saved our classifier (thresholder), so we can re-use it:
![](images/load_pixel_classifier_menu.png){fig-alt="Screenshot of QuPath `Classify` menu showing the `Load classifier` dialog." fig-align="center"}
![](images/load_pixel_classifier.png){fig-alt="Screenshot of QuPath `Load pixel classifier` dialog." fig-align="center"}
## Thresholder exercise
- Duplicate your stain vectors image and delete any Annotations
- Zoom so you see 1 or 2 whole tissue slices
- Open `Pixel classification ‣ Create thresholder`
- Adjust parameters until you have the 4 tissue slices as class `Tissue`
- Save the classifier as `Tissue` (or similar)
- Use `Create objects` to make 4 annotations, one for each slice
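The `Create objects` step of this exercise can also be reproduced from a script once the thresholder is saved. A minimal Groovy sketch, assuming a saved classifier named `Tissue` and placeholder size limits (adjust to your image; the `SPLIT` option asks for one annotation per slice):

```groovy
// With nothing selected, the thresholder is typically applied to the full image
resetSelection()

// Create annotations from the saved pixel thresholder "Tissue"
createAnnotationsFromPixelClassifier(
    "Tissue",     // name the thresholder was saved under
    200000.0,     // minimum object size (calibrated units^2) — placeholder
    10000.0,      // minimum hole size (calibrated units^2) — placeholder
    "SPLIT"       // split into separate annotations, one per tissue slice
)
```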
# Sharing QuPath projects
## QuPath projects are portable
### Sharing a QuPath project:
- zip up the entire project directory
- email or share it with a colleague
:::{.callout-important}
**Ensure they can access the actual image files!
The project folder only contains QuPath objects and data!
The image files will not be inside the project (unless you placed them there)!**
:::
## Receiving a QuPath project
:::{.callout-important}
#### You will need to re-link the project to the location of the actual image files
:::
![](images/project_Update_URIs.png){fig-alt="Screenshot of QuPath `Update URIs` dialog." fig-align="center"}
## Re-linking images: Update URIs
#### Double-click each red image and choose the new location individually:
![](images/project_update_URIs_2.png){fig-alt="Screenshot of QuPath `Update URIs` dialog showing the `Choose URI` dialog." fig-align="center"}
#### Or if all of the images are together, click `Search…` to find the new enclosing folder
# Back to pixel classification
## Thresholder measurement exercise
- Will need some parent objects: the Tissue annotations
- Duplicate the image with the Tissue annotations generated by the thresholder
:::{.callout-important}
### Ensure `Also duplicate data` is checked!
:::
- Ensure there are just 4 Annotations, one for each tissue slice
## Thresholding within an Annotation
- Let's analyze the white areas within the `Tissue` Annotations
- We previously created them without holes (filled in) to capture the full areas of the slices
- Create a new Class: `White` or `lumens`, etc.
- `Classify ‣ Pixel classification ‣ Create thresholder`
- Adjust the properties of your Thresholder to highlight the hole-like areas as the new Class, while the rest is `Stroma`
## Thresholding within an Annotation
- Once you have reasonable segmentation, give the classifier a meaningful name and save it
- To add area-based measurements to *our existing* `Tissue` Annotations, click `Measure`
![](images/pix_class_measure.png){fig-alt="Screenshot of QuPath `Pixel classifier` dialog for creating measurements in Annotations." fig-align="center"}
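The `Measure` button is also scriptable via a standard helper, which is convenient for batch processing. A sketch, assuming the classifier from this exercise was saved as `White` (the second argument is the prefix used for the new measurement names):

```groovy
// Select the annotations we want to measure, then add area/percentage
// measurements from the saved classifier "White" to them
selectAnnotations()
addPixelClassifierMeasurements("White", "White")
```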
## Creating objects within an Annotation
- Use `Classify ‣ Pixel classification ‣ Load pixel classifier` and load the previous classifier
- Click `Create objects` and select `All Annotations` for `Choose parent objects`:
![](images/pixel_class_parents_annotations.png){fig-alt="Screenshot of QuPath `Pixel classifier` dialog" fig-align="center"}
## Create objects within an Annotation
- We want one Annotation of the new class *per tissue slice*
- Select `Annotation` as `New object type`
- Enter a `Minimum object size`: the area of the features we want to detect (the lumens/holes), maybe 200 µm^2^?
- Enter a similar value for `Minimum hole size`—this would be holes *within* the objects we are interested in!
:::{.callout-important}
### Ensure `Split objects` is **not** checked!
:::
## Create objects dialog
![](images/create_objects_dialog_holes.png){fig-alt="Screenshot of QuPath `Create objects` dialog with `Split objects` unchecked, highlighted in red." fig-align="center"}
# Training a Pixel classifier
## Going beyond thresholds
- QuPath includes built-in support for machine learning pixel classification
- Classifiers can be trained, saved, and used for inference (classification)
- Instead of a simple numeric cutoff, training uses Annotations of two *or more* classes and a number of different channels & `features` (computed values per pixel)
## Training a pixel classifier: setup
- We're going to look at sub-regions of the tissue slices, so we will need some parent objects: the Tissue annotations
- Duplicate the image with the original Tissue annotations generated by the thresholder
:::{.callout-important}
### Ensure `Also duplicate data` is checked!
:::
- Ensure there are just 4 Annotations, one for each tissue slice
## Training a pixel classifier: setup
- Ensure you have three classes:
  - e.g. Stroma, border, lumens
- Use the Polyline tool (V) to `paint` a few squiggles in:
  - the dark purple/brown border areas, and set the class to `border`
  - the white `holes`, and set the class to `lumens`
  - other parts of the tissue slice, and set the class to `Stroma`
:::aside
We could make even more...
:::
## Training a pixel classifier
`Classify ‣ Pixel classification ‣ Train pixel classifier`
![](images/classify_pix_train_pix.png){fig-alt="Screenshot of QuPath `Classify` menu showing the `Pixel classification` submenu with `Train pixel classifier` selected." fig-align="center"}
## Training a pixel classifier
![](images/train_pix_classifier.png){fig-alt="Screenshot of QuPath `Pixel classifier` dialog used to train the pixel classifier." fig-align="center"}
## Pixel classifier: Key settings
- Classifier: the type of classifier; for initial work `Random trees` is recommended, since it is less of a black box than ANN_MLP and can report the importance of features
- Resolution: as before, controls the level of detail used, as a trade-off against performance
- Features: additional computed features to be used by the classifier
- Output: a single class per pixel or estimated class probabilities
## Pixel classifier: Random trees
![](images/Pix_class_RTrees.png){fig-alt="Screenshot of QuPath `Pixel classifier` dialog with the `Random trees` parameters shown. The `Edit` button and `Calculate variable importance` are marked in red." fig-align="center"}
## Pixel classifier: Random trees: variable importance
![](images/Pix_class_Rtrees_var_import.png){fig-alt="Screenshot of QuPath `Log` showing `Variable importance`." fig-align="center"}
## Pixel classifier: selecting features
![](images/Pix_class_select_features.png){fig-alt="Screenshot of QuPath `Pixel classifier` dialog with the `Select features` dialog shown. The `Edit` button, `Channels`, `Scales`, and `Features` are marked in red." fig-align="center"}
## Pixel classifier: selecting features
- Channels: choose channels that can discriminate the classes, typically the stains
- Scales: controls the size of the smoothing/processing kernel, so can help identify (or suppress) details
- Features: additional calculated transformations; you can check their importance in the Log (Random trees only)
:::{.callout-tip}
#### Typically, *fewer* but *well-chosen* features provide more robust results.
:::
## Pixel classifier: features
| Feature | Useful for (from QuPath docs) |
|---------|---------|
| *Gaussian filter* | General-purpose |
| Laplacian of Gaussian | Blob-like things, some edges |
| *Weighted deviation* | Textured vs. smooth areas |
| Gradient magnitude | Edges |
| Structure tensor eigenvalues | Long, stringy things |
| Structure tensor coherence | ‘Oriented’ regions |
| Hessian determinant | Blob-like things |
| Hessian eigenvalues | Long, stringy things |
## Pixel classifier: visualizing features
![](images/Pix_class_show_features.png){fig-alt="Screenshot of QuPath `Pixel classifier` dialog with the `Show classification` dropdown marked in red and showing a feature." fig-align="center"}
:::{.callout-tip}
#### Use the `C` key and opacity slider!
:::
## Pixel classifier: advanced options
![](images/Pix_class_advanced_options.png){fig-alt="Screenshot of QuPath `Pixel classifier` dialog with the `Advanced options` dialog shown. The `Maximum samples` and `Reweight samples` options are marked in red." fig-align="center"}
## Pixel classifier: advanced options
- Maximum samples: the maximum number of training pixels taken from the Annotations; if exceeded, pixels will be sampled randomly
:::{.callout-tip}
#### Typically, using Polyline Annotation squiggles gives best results
:::
- Reweight samples: weights the training pixels to account for class imbalance
:::{.callout-important}
#### This does not let you ignore class balance entirely!
:::
## Pixel classifier: exercise
Using the `Random trees` classifier, test out the impact of the classifier parameters, especially:
- Resolution
- Features: use the `Log` to check importance
:::{.callout-important}
#### You can add more Annotations!
:::
When the results look reasonable, save the classifier and your image with its training Annotations
:::{.callout-tip}
#### Load and run your classifier (`Create objects`) on a fresh copy of the 4 Annotation image
:::
## Pixel classifier: final tips
- Can be easiest to make a project dedicated to training a classifier
- Can annotate multiple images, save them, and then use `Load training` to add them to the training data
- Once saved, you can copy the classifier to a new project for inference or use `Import from file` in the `Load pixel classifier` dialog
:::{.callout-important}
#### Make a note of your features & parameters, in case you want to train again!
:::
# Key concept: QuPath objects
## QuPath is more than a viewer
### Workflow:
- View pixels
- Identify objects
- Identify child objects
- Count
- Measure
- Classify
## QuPath Objects
### All QuPath objects have:
- a name
- a region of interest (ROI)
- a class
- measurements/features
## Annotations
- Large, **editable** objects
- Manually made or via pixel classifier
- Few default measurements (shape related)
- More "costly" computationally
:::{.callout-important}
#### You want to have few Annotations and use them as *parent objects*
:::
## Detections
- Small(er), **non-editable** objects
- Generated by QuPath, within a parent object
- Many default measurements (shape and intensity)
- "Cheap" computationally, can have 100K+
:::{.callout-important}
#### Detections can be classified!
:::
## Special Detections
### Cells
- two ROIs: cell boundary and nucleus (optional)
- have shape and intensity measurements for both
### Tiles
- groups of pixels (superpixels)
:::{.callout-important}
### Superpixels don't have any default measurements
:::
- if classified, can be merged into Annotations
## Measurements & features
- Measurements & features can be summarized in tables for annotations and detections
- Default measurements include shape-related measurements and intensity-related measurements (Detections)
- Can be used to train a machine learning *object classifier*
- Additional measurements/features can be added using: `Analyze ‣ Calculate features`
## Adding additional measurements
- `Smoothed features`: use a weighted average of the corresponding measurements of neighboring objects
- `Intensity features`: mean, SD, etc. of channel intensities
- `Shape features`: Area, perimeter, caliper, etc.
:::{.callout-note}
### By default:
- Annotations have only shape-related measurements
- Cell detections have both shape and intensity measurements
- Tile detections have no measurements
:::
# Cell detection
## QuPath cell detection
- built-in cell segmentation algorithm, based on thresholding
- automatic tiling and parallel processing
- detects nuclei and then expands them to get cell bodies
- the resulting Cell Detections have shape and intensity measurements for both nucleus and cell
:::{.callout-note}
#### ML approaches including Cellpose and StarDist are available as extensions
:::
## Detecting cells: setup
- Will need some parent objects: the Tissue annotations
- Duplicate the image with the Tissue annotations generated by the thresholder
:::{.callout-important}
### Ensure `Also duplicate data` is checked!
:::
- Detection can be computationally intensive, so best to start with a small region
- Draw a small rectangle ROI (500 µm x 500 µm), ensure it's selected
## Detecting cells
`Analyze ‣ Cell detection ‣ Cell detection`
![](images/menu_cell_detection.png){fig-alt="Screenshot of QuPath `Analyze` menu showing the `Cell detection` submenu." fig-align="center"}
:::{.callout-note}
`Positive cell detection` allows setting additional intensity thresholds for scoring (e.g. DAB)
:::
## Cell detection dialog (top)
![](images/cell_detection_top.png){fig-alt="Screenshot of top portion of QuPath `Cell detection` dialog" fig-align="center"}
## Cell detection parameters: setup
:::{.callout-note}
#### Defaults are frequently pretty good!
:::
- Detection image: the channel where nuclei are most apparent, typically `Hematoxylin OD`, but `OD Sum` can be useful for H-DAB
:::{.callout-note}
#### Use B&C dialog to take a look!
:::
- Requested pixel size: the resolution of the image used for the processing; enter `0` for native (full) resolution. This is a trade-off of speed vs. detail; 0.5 or 1 is frequently a good choice.
## Cell detection parameters: nucleus
- Background radius: the radius of the area used for background subtraction; you typically want it to be larger than the nuclei/clusters, or set it to `0` to disable
- Opening by reconstruction: turn off if the background isn't homogeneous and you see tiling artifacts
- Median filter radius: radius of the median filter (smoothing); higher values will tend to reduce fragmentation, `0` to disable
- Sigma: radius of the Gaussian filter (smoothing); higher values will tend to reduce fragmentation
## Cell detection dialog (middle)
![](images/cell_detection_middle.png){fig-alt="Screenshot of middle portion of QuPath `Cell detection` dialog" fig-align="center"}
## Cell detection parameters: nucleus, intensity
- Minimum area: area of smallest nuclei
- Maximum area: area of largest nuclei
- Threshold: minimum signal of nuclei (relative to background)
- Max background intensity: if background subtraction was performed, regions with background above this value are ignored (e.g. a tissue fold or staining artifact)
- Split by shape: uses roundness to split clusters/clumps; you generally want this ticked
## Cell detection dialog (bottom)
![](images/cell_detection_bottom.png){fig-alt="Screenshot of bottom portion of QuPath `Cell detection` dialog" fig-align="center"}
## Cell detection parameters: cell, general
- Cell expansion: how much to expand nuclei by to get cell borders
  - you can make it small to get a peri-nuclear region
- Include nucleus: ensure this is ticked to keep the nuclei as ROIs
- Smooth boundaries: smooths boundaries based on resolution and reduces memory usage; you generally want it ticked
- Make measurements: tick this to get measurements for the generated Detections
## Cell detection exercise
- Using your rectangular Annotation, test out the cell detection parameters by re-running cell detection with different settings, e.g.
- Requested pixel size
- Median filter radius
- Sigma
:::{.callout-tip}
- You can see the number of counted cells in the Analysis pane Annotation tab
- Press `D` to toggle showing Detections or `F` to toggle their Fill
:::
## Running cell detection on our slices
:::{.callout-important}
#### Depending on the parameters and power of your computer this can take a few minutes!
:::
- Delete the rectangular Annotation and *don't keep descendants* (the test Detections)
- Select the Annotation of the slice you want to detect (or select all 4)
- Click `Run` again if you have the `Cell detection` dialog open
:::{.callout-important}
#### `Cell detection` parameters are not saved. If you closed the dialog, your settings are lost, but they can be recovered from the Analysis pane Workflow tab
:::
## Using the Workflows tab
![](images/Workflow_cell_detection.png){fig-alt="Inset of screenshot of QuPath application window (top left) showing the Analysis pane Workflow tab." fig-align="center"}
:::{.callout-tip}
#### Double click the Workflow step you want to re-run!
:::
## Using the Workflows tab
- Workflow steps are saved for each image individually
- Can create scripts from Workflows to easily repeat/reapply an analysis
:::{.callout-tip}
#### Use `Create workflow` button to remove any spurious steps prior to `Create script`
:::
- Can also copy Workflow steps individually using right-click `Copy command` to paste into a script
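As an illustration, a script created from the Workflow for the cell detection step typically boils down to selecting the parent annotations and re-running the detection plugin with the recorded parameters. The Groovy sketch below uses placeholder values; copy the exact `runPlugin` line (with your parameters) from your own Workflow tab rather than re-typing it:

```groovy
// Re-run cell detection inside every annotation, using recorded parameters.
// The JSON parameter string here is a shortened placeholder — use the full
// command copied from the Workflow tab (right-click > Copy command).
selectAnnotations()
runPlugin(
    'qupath.imagej.detect.cells.WatershedCellDetection',
    '{"detectionImage": "Hematoxylin OD", "requestedPixelSizeMicrons": 0.5, ' +
    '"sigmaMicrons": 1.5, "minAreaMicrons": 10.0, "maxAreaMicrons": 400.0, ' +
    '"threshold": 0.1, "cellExpansionMicrons": 5.0, "includeNuclei": true, ' +
    '"makeMeasurements": true}'
)
```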
## Detection measurements
- Can see measurements in the Measurements table (`Show detection measurements`)
- More export options: `Measure ‣ Export measurements`
- Can also visualize any Detection measurement as a "heat" map: `Measure ‣ Show measurement maps`
- can be informative for `Object classification`
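Exporting measurements is likewise scriptable; a minimal Groovy sketch using the standard helper (the output path is a placeholder):

```groovy
// Save all detection measurements of the current image to a text file
// (placeholder path — change it to a writable location)
saveDetectionMeasurements('/path/to/detection_measurements.tsv')
```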
:::{.callout-tip}
#### `Smoothed measurements` take into account the local neighborhood of each cell and can be even more useful