We run into a problem when the original, high-resolution data is not divisible by 2 and, in a workflow, segmentation of initial objects (organoids) is performed at pyramid level 2. If the user then tries to segment single cells within the organoids at level 0 (which requires loading the level 2 organoid segmentation maps and upsampling them to level 0 for masking), it fails here:

fractal-tasks-core/fractal_tasks_core/upscale_array.py, line 85 (commit 9ffc3c6)

And we get an error like:

ValueError: Cannot convert highres_region=(slice(0, 227, None), slice(0, 580, None), slice(5540, 6524, None)), given lowres_shape=(227, 3885, 3585) and highres_shape=(227, 15543, 14342). Incommensurable sizes highres_size=15543 and lowres_size=3885.

The high-resolution image has a shape of 15543x14342. This happened because the original image comes from a search-first dataset: images were placed into the OME-Zarr array based on their microscope stage coordinates rather than arranged on a regular grid, which is why its shape is not divisible by 2.
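For reference, here is a minimal, self-contained sketch (not the actual upscale_array.py code) of why these shapes are incommensurable. It assumes the pyramid was built with an XY coarsening factor of 2 per level and floor rounding, and uses the shapes from the error message above:

```python
# Shapes taken from the error message above (ZYX order).
highres_shape = (227, 15543, 14342)  # pyramid level 0
lowres_shape = (227, 3885, 3585)     # pyramid level 2

def downsample_xy(shape, factor=2):
    """Model one pyramid level: coarsen Y and X by `factor` (floor), keep Z."""
    z, y, x = shape
    return (z, y // factor, x // factor)

# Two rounds of XY coarsening by 2 reproduce the level 2 shape ...
level_2 = downsample_xy(downsample_xy(highres_shape))
assert level_2 == lowres_shape  # (227, 3885, 3585)

# ... but upscaling level 2 by the integer factor 4 cannot recover level 0,
# because 15543 and 14342 are not multiples of 4 (15543 is not even divisible by 2).
upscaled = (lowres_shape[0], lowres_shape[1] * 4, lowres_shape[2] * 4)
print(upscaled)        # (227, 15540, 14340)
print(highres_shape)   # (227, 15543, 14342)

# A strict commensurability check (as a stand-in for the one that fails in
# upscale_array.py) therefore has to reject these shapes:
try:
    for hi, lo in zip(highres_shape, lowres_shape):
        if hi % lo != 0:
            raise ValueError(
                f"Incommensurable sizes highres_size={hi} and lowres_size={lo}."
            )
except ValueError as err:
    print(err)  # Incommensurable sizes highres_size=15543 and lowres_size=3885.
```

In other words, 3885 * 4 = 15540 and 3585 * 4 = 14340, so no integer upscaling factor can map the level 2 array back onto the level 0 shape without padding or trimming the last few pixels.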