Microns data cutout, EmptyVolumeException #544
While the default visualization in Neuroglancer is at a [4,4,40] nm/voxel resolution, the actual MIP-0 image data is at [8,8,40] nm/voxel, so you need to adjust your bounding box accordingly. Alternatively, if you use vol.download you can pass the coord_resolution argument to tell it what resolution your coordinates are expressed in.
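A minimal sketch of that coordinate adjustment (the coordinates are hypothetical ones read off a [4,4,40] Neuroglancer view, and the factor of two only applies if MIP-0 really is [8,8,40] nm/voxel):

ng_res = [4, 4, 40]     # resolution the coordinates were read at (nm/voxel)
mip0_res = [8, 8, 40]   # resolution of the stored MIP-0 voxels (nm/voxel)

start_ng = [190205, 144905, 20058]
stop_ng  = [190363, 144991, 20092]

# rescale each axis: multiply by the source resolution, divide by the target
start_mip0 = [c * n // m for c, n, m in zip(start_ng, ng_res, mip0_res)]
stop_mip0  = [c * n // m for c, n, m in zip(stop_ng, ng_res, mip0_res)]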
Thanks for your help!
I think that at the moment you have to either look at the docstring or the code itself:
https://github.com/seung-lab/cloud-volume/blob/731a646210a768a54320b8b0b8202704eb789d59/cloudvolume/frontends/precomputed.py#L647
Thanks! But it seems like this is beyond my skill; it's unclear to me how to use vol.download or the coord_resolution argument. Although less straightforward at first glance, maybe converting the nm/voxel size in my original command might work sooner. But I guess it's also not just a matter of multiplying the x/y values by 2, because then it tells me I'm outside the inclusive range. Could you give some guidance on how to adjust the bounding box? Sorry for all the questions...
Hi Koen,
Just use the original bounding box and set coord_resolution=[4,4,40] and see if that works.
Will
Thanks Will! Wasn't sure where to add it, so I tried two ways but with no success.

I tried:

from cloudvolume import CloudVolume
vol = CloudVolume("precomputed://gs://iarpa_microns/minnie/minnie65/seg", mip=0, use_https=True)
cutout = vol[380410:380726, 289810:289982, 20058:20092], coord_resolution=[4,4,40]

ValueError: too many values to unpack (expected 2)

and:

from cloudvolume import CloudVolume
vol = CloudVolume("precomputed://gs://iarpa_microns/minnie/minnie65/seg", mip=0, use_https=True, coord_resolution=[4,4,40])

TypeError: __new__() got an unexpected keyword argument 'coord_resolution'

Maybe I'm trying to use the argument in the wrong line? The last error (unexpected keyword) also made me think that maybe I'm using an older version where this function isn't available yet, but I tried reinstalling CloudVolume with no change.
Try
from cloudvolume import CloudVolume
import numpy as np
cv = CloudVolume("precomputed://gs://iarpa_microns/minnie/minnie65/seg",
mip=0, use_https=True)
bounds = np.s_[380410:380726, 289810:289982, 20058:20092]
img = cv.download(bounds, mip=1, coord_resolution=[4,4,40])
I tried, but still no luck unfortunately:

from cloudvolume import CloudVolume
import numpy as np

cv = CloudVolume("precomputed://gs://iarpa_microns/minnie/minnie65/seg", mip=0, use_https=True)

bounds = np.s_[380410:380726, 289810:289982, 20058:20092]
img = cv.download(bounds, mip=1, coord_resolution=[4,4,40])

Decompressing: 100%| 2/2 [00:00<00:00, 3.75it/s]

EmptyVolumeException Traceback (most recent call last)
~\anaconda3\lib\site-packages\cloudvolume\frontends\precomputed.py in download(self, bbox, mip, parallel, segids, preserve_zeros, agglomerate, timestamp, stop_layer, renumber, coord_resolution)
~\anaconda3\lib\site-packages\cloudvolume\datasource\precomputed\image\__init__.py in download(self, bbox, mip, parallel, location, retain, use_shared_memory, use_file, order, renumber)
~\anaconda3\lib\site-packages\cloudvolume\datasource\precomputed\image\rx.py in download_sharded(requested_bbox, mip, meta, cache, lru, spec, compress, progress, fill_missing, order, background_color)
~\anaconda3\lib\site-packages\cloudvolume\datasource\precomputed\image\rx.py in decode(meta, input_bbox, content, fill_missing, mip, background_color)
~\anaconda3\lib\site-packages\cloudvolume\datasource\precomputed\image\rx.py in _decode_helper(fn, meta, input_bbox, content, fill_missing, mip, background_color)

EmptyVolumeException: Bbox([94984, 72370, 20034],[95112, 72498, 20066], dtype=int32)
Hmm, that's not great. Unfortunately I'm moving at the moment and can't debug the code for a few days.

You can also try dividing your bounding box by 2x2x1 to get mip 1 from mip 0.

There's also a CloudVolume method cv.bbox_to_mip that may be helpful for translating bounding boxes between mips.
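A rough sketch of both suggestions (this assumes cv.bbox_to_mip takes the bounding box, the mip it is currently expressed in, and the target mip; the coordinates are placeholders):

from cloudvolume import CloudVolume, Bbox

cv = CloudVolume("precomputed://gs://iarpa_microns/minnie/minnie65/seg", mip=0, use_https=True)

# a bounding box in mip-0 voxel coordinates (placeholder values)
bbox_mip0 = Bbox([190205, 144905, 20058], [190363, 144991, 20090])

# manual translation: mip 1 is downsampled 2x2x1 relative to mip 0,
# so divide x and y by 2 and leave z alone
bbox_mip1_manual = Bbox(
  [190205 // 2, 144905 // 2, 20058],
  [190363 // 2, 144991 // 2, 20090],
)

# or let CloudVolume do the translation between mip levels
bbox_mip1 = cv.bbox_to_mip(bbox_mip0, mip=0, to_mip=1)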
Hi Koen,

I finally have my laptop and internet so I took a look. It seems this copy of minnie65 does not have a fake 4nm base resolution, so if you look at the segmentation in Neuroglancer you can read the coordinates right off.

The largest X dimension is 218808, so the coordinates you provided are far too large. The coordinates you should provide to CloudVolume are in units of voxels, not physical units, so you'll need to divide by the resolution at minimum to be able to work with it.
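As an illustration of that unit conversion (a sketch; the nanometer point is made up, and it assumes cv.resolution reports the nm/voxel size of the current mip):

import numpy as np
from cloudvolume import CloudVolume

cv = CloudVolume("precomputed://gs://iarpa_microns/minnie/minnie65/seg", mip=0, use_https=True)

# a made-up point in physical units (nanometers)
point_nm = np.array([761640, 579240, 802320])

# divide by the voxel resolution at the current mip to get voxel coordinates
point_vx = point_nm // np.array(cv.resolution)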
Hi William,

Many thanks for taking a look, I very much appreciate your help! These are the dimensions I see (and the ones I'm trying to get):

[screenshot of the Neuroglancer dimensions]

So therefore I was assuming I was in the right range.

from cloudvolume import CloudVolume
...

Decompressing: 100%| 2/2 [00:00<?, ?it/s]

and then

import tifffile
...

I do get a TIFF file with specific dimensions that seem to correspond to the ones I used in the command, but all the images are black. I tried dividing the coordinates, but then I get back to the same errors as before (EmptyVolumeException).
I tend to find that the easiest way to work with bounding boxes is the cloudvolume Bbox class. In this case, for example, what I would have done is:

coords = [[190205, 144905, 20058], [190363, 144991, 20090]]
bbox = cloudvolume.Bbox(coords[0], coords[1])
img = cv.download(bbox, coord_resolution=[4,4,40])

This downloads a perfectly good cutout for me. I suspect that the reason you're only seeing black is that the TIFF is being interpreted as 16 bit, when it should be 8 bit (0-255). One other note: are you intending to download the downsampled 16,16,40 imagery? That is what the mip=1 argument is doing in the download function, because mip 0 is already 8,8,40 (you can check this with ...)
Thanks! Then maybe I'm closer than I thought :)
Scanning the examples for tifffile, it looks like it inherits the bit size from the dtype of the numpy array. Try setting the dtype of the downloaded array to an 8 bit unsigned integer when you pass it to the imwrite:
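Something along these lines (a sketch; the stand-in array and filename are made up):

import numpy as np
import tifffile

# stand-in for the array returned by cv.download(...); random values so the
# snippet runs on its own
img = np.random.randint(0, 256, size=(158, 86, 34), dtype=np.uint16)

# tifffile infers the bit depth from the numpy dtype, so cast to uint8 first
tifffile.imwrite("cutout_8bit.tif", np.asarray(img).astype(np.uint8))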
Oh, I see, you're downloading the segmentation. This looks correct for that. Segmentation data has the 64-bit object id in each voxel of the downloaded array. If you want to be doing that (as opposed to imagery), then you also have to convert the int64 segment ids to random colors for visualization. I recommend using Will's fastremap package to convert them to small integers and then make a lookup table of colors using any color palette package. I have an example of doing this in a package I made a while ago called ImageryClient, but it might not work right now with recent cloudvolume versions; I need to do some work to bring it up to date with a lot of useful changes to cloudvolume in the last year or so.
Glad that worked for you Koen! re: coloring segmentation, I recently found this interesting library that uses fastremap under the hood. Might be useful at some point! You can also colorize segmentation with matplotlib color maps too.
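A small sketch of the matplotlib route (the segment ids are made up, and it assumes fastremap.renumber returns the remapped array together with the id mapping):

import numpy as np
import fastremap
from matplotlib import pyplot as plt

# stand-in segmentation cutout with a few made-up 64-bit object ids
seg = np.random.choice(
  [0, 648518346349539896, 648518346349538718], size=(64, 64, 4)
).astype(np.uint64)

# remap the huge ids to small consecutive integers
small, mapping = fastremap.renumber(seg)

# index a matplotlib palette with the small integers to get RGBA colors
cmap = plt.get_cmap("tab20")
rgba = cmap(small.astype(int) % cmap.N)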
Oh, and set mip=[8,8,40] or a lower resolution to find layers with data in them. Mips can also be set as mip=1.
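For example (a sketch with placeholder coordinates; whether a given mip actually contains data still depends on the layer):

from cloudvolume import CloudVolume, Bbox

cv = CloudVolume("precomputed://gs://iarpa_microns/minnie/minnie65/seg", mip=0, use_https=True)
bbox = Bbox([190205, 144905, 20058], [190363, 144991, 20090])  # placeholder coordinates

# mip given as an integer level
img = cv.download(bbox, mip=1, coord_resolution=[4,4,40])

# mip given as a resolution triple naming the scale directly
img = cv.download(bbox, mip=[8,8,40], coord_resolution=[4,4,40])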
Thanks a lot Will! I will have a look!
Hello!
Using CloudVolume I'm trying to get a cutout of some of the Microns EM data so I can further process it locally as a TIFF.
I'm using the code below which has worked previously on another EM dataset on Neurodata, but unfortunately I can't seem to get it to work for Microns and I get an EmptyVolumeException. I'm pretty new to EM data and command line so there might be something very obvious that I'm overlooking. Any idea what I might be doing wrong?
Many thanks in advance for your help!
Koen
======
from cloudvolume import CloudVolume
vol = CloudVolume("precomputed://gs://iarpa_microns/minnie/minnie65/seg", mip=0, use_https=True)
cutout = vol[190205:190363, 144905:144991, 20058:20092]

Decompressing:  50%| 1/2 [00:00<00:00, 4.48it/s]

EmptyVolumeException Traceback (most recent call last)
<ipython-input> in <module>
      1 from cloudvolume import CloudVolume
      2 vol = CloudVolume("precomputed://gs://iarpa_microns/minnie/minnie65/seg", mip=0, use_https=True)
----> 3 cutout = vol[190205:190363, 144905:144991, 20058:20092]

~\anaconda3\lib\site-packages\cloudvolume\frontends\precomputed.py in __getitem__(self, slices)
    526     requested_bbox = Bbox.from_slices(slices)
    527
--> 528     img = self.download(requested_bbox, self.mip)
    529     return img[::steps.x, ::steps.y, ::steps.z, channel_slice]
    530

~\anaconda3\lib\site-packages\cloudvolume\frontends\precomputed.py in download(self, bbox, mip, parallel, segids, preserve_zeros, agglomerate, timestamp, stop_layer, renumber, coord_resolution)
    706     parallel = self.parallel
    707
--> 708     tup = self.image.download(
    709       bbox.astype(np.int64), mip, parallel=parallel, renumber=bool(renumber)
    710     )

~\anaconda3\lib\site-packages\cloudvolume\datasource\precomputed\image\__init__.py in download(self, bbox, mip, parallel, location, retain, use_shared_memory, use_file, order, renumber)
    154     scale = self.meta.scale(mip)
    155     spec = sharding.ShardingSpecification.from_dict(scale['sharding'])
--> 156     return rx.download_sharded(
    157       bbox, mip,
    158       self.meta, self.cache, self.lru, spec,

~\anaconda3\lib\site-packages\cloudvolume\datasource\precomputed\image\rx.py in download_sharded(requested_bbox, mip, meta, cache, lru, spec, compress, progress, fill_missing, order, background_color)
     90     for zcode, chunkdata in itertools.chain(io_chunkdata.items(), lru_chunkdata):
     91       cutout_bbox = code_map[zcode]
---> 92       img3d = decode_fn(
     93         meta, cutout_bbox,
     94         chunkdata, fill_missing, mip,

~\anaconda3\lib\site-packages\cloudvolume\datasource\precomputed\image\rx.py in decode(meta, input_bbox, content, fill_missing, mip, background_color)
    576   Returns: ndarray
    577   """
--> 578   return _decode_helper(
    579     chunks.decode,
    580     meta, input_bbox,

~\anaconda3\lib\site-packages\cloudvolume\datasource\precomputed\image\rx.py in _decode_helper(fn, meta, input_bbox, content, fill_missing, mip, background_color)
    627       content = b''
    628     else:
--> 629       raise EmptyVolumeException(input_bbox)
    630
    631   shape = list(bbox.size3()) + [ meta.num_channels ]

EmptyVolumeException: Bbox([190225, 144868, 20034],[190353, 144996, 20066], dtype=int32)