I have 3620 images in .tif format, totalling 1 TB. Here's their info:

I'm looking to use Neuroglancer to visualize this dataset. To achieve this, I have converted all the .tif files to .h5 format with the following script:
```python
import os

import h5py
import tifffile as tiff
from tqdm import tqdm

tif_dir = "/path/to/tif/folder/"
output_file = "/path/to/file.h5"

image_files = sorted(
    [os.path.join(tif_dir, f) for f in os.listdir(tif_dir) if f.endswith(".tif")]
)

first_image = tiff.imread(image_files[0])
img_shape = first_image.shape

with h5py.File(output_file, "w") as h5f:
    # Create a dataset to hold the TIFF images
    dtype = first_image.dtype

    # Create the dataset with chunking and compression
    dataset = h5f.create_dataset(
        "images",
        shape=(len(image_files),) + img_shape,
        dtype=dtype,
        chunks=(1, img_shape[0] // 10, img_shape[1] // 10),  # Example chunking
        compression="gzip",
    )

    # Iterate through each TIFF file and write it to the HDF5 dataset
    for i, filename in enumerate(tqdm(image_files, desc="Processing TIFF files")):
        if filename.endswith(".tif"):
            filepath = os.path.join(tif_dir, filename)
            image_data = tiff.imread(filepath)
            dataset[i, :, :] = image_data
```
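As an aside, h5py reads files like this lazily: opening the file does not load the data itself, and indexing pulls only the chunks that overlap the requested slice. A small sketch of reading the result back, assuming the `output_file` path and "images" dataset name used above:

```python
import h5py

# Open the converted file read-only; this does not load the dataset into RAM.
with h5py.File("/path/to/file.h5", "r") as h5f:
    images = h5f["images"]
    print(images.shape, images.dtype, images.chunks)

    # Indexing reads only the chunks that overlap the requested slice.
    middle_slice = images[images.shape[0] // 2]
```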
The .h5 file generated is very large (900 GB) and therefore cannot be loaded into RAM. From what I've read, you can pre-compute the contents of this file to display it in chunks on demand with CloudVolume. I tried this with the following script:

But on the way out I find myself with a size incompatibility problem:

I've tried changing dimensions or reversing the values for each slice, but I always end up with a problem like this...

Thanks for your help!
Thanks for writing in. The first thing I would try, since you seem to be inserting these slices in an XY plane, is to set the chunk size to [1024, 1024, 1]; currently it is a single pixel in the x direction. Second, maybe try doing a transpose (slice_data.T) instead of a reshape; you can then do a np.squeeze. CloudVolume expects 4D arrays, so add two trivial dimensions to the end.
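A minimal sketch of what those suggestions could look like end to end, assuming the "images" dataset produced by the conversion script above, a hypothetical local file:// destination for the precomputed output, and placeholder resolution values; it is not the exact snippet from the reply:

```python
import h5py
import numpy as np
from cloudvolume import CloudVolume

h5_path = "/path/to/file.h5"               # HDF5 file from the conversion script
out_path = "file:///path/to/precomputed"   # hypothetical local precomputed destination

with h5py.File(h5_path, "r") as h5f:
    images = h5f["images"]                 # assumed shape (num_slices, height, width) = (z, y, x)
    num_slices, height, width = images.shape

    # Describe the volume. Resolution is a placeholder; chunk_size spans a full
    # XY plane and is 1 voxel deep in z, as suggested above.
    info = CloudVolume.create_new_info(
        num_channels=1,
        layer_type="image",
        data_type=str(images.dtype),
        encoding="raw",
        resolution=[1, 1, 1],              # nm per voxel, placeholder
        voxel_offset=[0, 0, 0],
        chunk_size=[1024, 1024, 1],
        volume_size=[width, height, num_slices],  # CloudVolume is ordered x, y, z
    )
    vol = CloudVolume(out_path, info=info)
    vol.commit_info()

    for z in range(num_slices):
        # np.squeeze drops any leading singleton axis if the slice comes back as (1, y, x).
        slice_data = np.squeeze(images[z])
        # Transpose to (x, y) instead of reshaping, then add two trivial
        # dimensions so CloudVolume receives a 4D (x, y, z, channel) array.
        chunk = slice_data.T[:, :, np.newaxis, np.newaxis]
        vol[:, :, z : z + 1] = chunk
```

With chunks of [1024, 1024, 1], each write of a single XY slice only touches the chunks in that plane, so the slices can be streamed one at a time without ever holding the whole volume in memory.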