- replace pillow with pypng for faster IO of PNG files.
- add operator skip-none
- add operator read-npy
- add support to visualize points in neuroglancer
- make read-json work with bbox in task
- add operator load-synapses
- add new class of Synapses. This class will handle all the synapse operations.
- fix the threshold value type in the thresholding operator
- neuroglancer operator input changed from `chunk_names` to `inputs` to be consistent with other layers.
- move agglomerate operator to a plugin to remove the dependency on waterz. Waterz is not easy to install on macOS and does not currently support Python 3.9; after this change, we should be able to support Python 3.9.
- replace the evaluation code from waterz with gala. The evaluation code is copied from gala as permitted by its license, with the agreement of the package author, Juan.
- change default connectivity from 26 to 6 in connected-components operator
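To illustrate what the connectivity change means, here is a generic scipy sketch (not chunkflow's actual implementation): under 26-connectivity, two voxels touching only at a corner belong to one component; under the new 6-connectivity default, only face-adjacent voxels are merged.

```python
import numpy as np
from scipy import ndimage

# Two voxels touching only at a corner: one component under 26-connectivity,
# two components under 6-connectivity (face adjacency only).
vol = np.zeros((2, 2, 2), dtype=bool)
vol[0, 0, 0] = True
vol[1, 1, 1] = True

# 6-connectivity: faces only
labels6, n6 = ndimage.label(vol, structure=ndimage.generate_binary_structure(3, 1))
# 26-connectivity: faces, edges, and corners
labels26, n26 = ndimage.label(vol, structure=ndimage.generate_binary_structure(3, 3))
```

The 6-connectivity default is the more conservative choice for segmentation, since it avoids merging objects that only touch diagonally.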
- added synapse annotation visualization.
- add read-json operator
- split the neuroglancer operator code into functions to make it more modular, readable, and maintainable.
- add zero-filling option for load-h5 operator. If the file does not exist, the chunk is filled with zeros.
- add skip-task operator: if a result file already exists, the task is skipped.
- add skip-all-zero operator: if a chunk is all zero, the task is skipped.
- add new patch inference backend: prebuilt. PR
- add new operator read-json
- fix chunk transpose with None voxel size and voxel offset
- write PNG files with the Z coordinate as the file name
- aligned block size option for generating tasks. This is useful to force alignment of chunks, and is used to align segmentation chunks for DVID.
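The alignment idea can be sketched as follows; `aligned_task_starts` is a hypothetical helper, not chunkflow's actual API. It snaps the ROI start down to the block grid so every generated task boundary lands on a multiple of the block size.

```python
def aligned_task_starts(roi_start, roi_stop, block_size):
    """Yield task start coordinates snapped down to block_size multiples,
    so that every task boundary falls on the block grid (illustrative sketch)."""
    start = [s - (s % b) for s, b in zip(roi_start, block_size)]
    return [
        (z, y, x)
        for z in range(start[0], roi_stop[0], block_size[0])
        for y in range(start[1], roi_stop[1], block_size[1])
        for x in range(start[2], roi_stop[2], block_size[2])
    ]

# ROI start (5, 3, 0) is snapped down to (0, 0, 0) for 64-voxel blocks.
starts = aligned_task_starts((5, 3, 0), (128, 128, 128), (64, 64, 64))
```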
- add the task number and current task index to the global state, so an operator can tell whether a task is the last one. This is useful for computing global statistics, such as the maximum segmentation ID of a segmentation and the target indices of segmentation in DVID.
- new operators to read and write NRRD file
- add a plugin to cutout target chunk from DVID server.
- use python logging module instead of verbose parameter
- changed the `plugin` operator parameter input and output names, so it can accept both chunks and other data, such as bounding boxes.
- the default downsampling factor changed from (1,2,2) to (2,2,2)
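A minimal numpy sketch of what the new isotropic default implies; average pooling is shown for illustration only, and the actual operator may differ.

```python
import numpy as np

def downsample(chunk, factor):
    """Average-pool a 3D chunk by an integer factor per axis (illustrative sketch)."""
    fz, fy, fx = factor
    # Crop to a shape divisible by the factor, then fold each axis.
    z, y, x = (s // f * f for s, f in zip(chunk.shape, factor))
    view = chunk[:z, :y, :x].reshape(z // fz, fz, y // fy, fy, x // fx, fx)
    return view.mean(axis=(1, 3, 5))

img = np.arange(4 * 4 * 4, dtype=np.float32).reshape(4, 4, 4)
iso = downsample(img, (2, 2, 2))    # new default: Z is downsampled too
aniso = downsample(img, (1, 2, 2))  # old default: Z resolution preserved
```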
- default input to plugin changed from chunk to None
- quit neuroglancer by typing q and pressing return.
- work with Flatiron disBatch.
- a new operator, remap, to renumber a series of segmentation chunks.
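A sketch of the renumbering idea, assuming 0 is background and IDs should become consecutive across a series of chunks; `renumber` is a hypothetical helper, not chunkflow's actual operator.

```python
import numpy as np

def renumber(seg, start=1):
    """Relabel a segmentation chunk so its IDs become consecutive integers
    beginning at `start`; returns the chunk and the next free ID, so a
    series of chunks can be renumbered without ID collisions (sketch)."""
    ids = np.unique(seg)
    ids = ids[ids != 0]  # keep 0 as background
    mapping = {old: new for new, old in enumerate(ids, start=start)}
    out = np.zeros_like(seg)
    for old, new in mapping.items():
        out[seg == old] = new
    return out, start + len(ids)

seg1 = np.array([[0, 7], [7, 9]])
seg2 = np.array([[3, 0], [3, 3]])
out1, nxt = renumber(seg1)       # 7 -> 1, 9 -> 2
out2, nxt = renumber(seg2, nxt)  # 3 -> 3
```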
- shard meshing. It is not validated yet: the data is written, but Neuroglancer still does not show it; a manifest appears to be missing.
- support downsample in Z
- support disbatch in manifest operator
- add voxel size of chunk
- automatically use chunk voxel size in neuroglancer operator
- mask out a chunk with a smaller voxel size. The voxel sizes must be evenly divisible, though.
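The idea can be sketched like this, assuming the mask's voxel size is an integer multiple of the chunk's along every axis; `apply_coarse_mask` is a hypothetical helper for illustration.

```python
import numpy as np

def apply_coarse_mask(chunk, mask, mask_voxel_size, chunk_voxel_size):
    """Mask a fine-resolution chunk with a coarser mask (illustrative sketch).
    Each mask voxel is expanded by the voxel-size ratio before multiplying."""
    assert all(m % c == 0 for m, c in zip(mask_voxel_size, chunk_voxel_size))
    factors = tuple(m // c for m, c in zip(mask_voxel_size, chunk_voxel_size))
    up = mask
    for axis, f in enumerate(factors):
        up = np.repeat(up, f, axis=axis)  # nearest-neighbor upsampling of the mask
    return chunk * up

chunk = np.ones((4, 4, 4), dtype=np.float32)
mask = np.zeros((4, 2, 2), dtype=np.float32)
mask[:, 0, 0] = 1  # keep only one coarse voxel per section
out = apply_coarse_mask(chunk, mask, (40, 32, 32), (40, 16, 16))
```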
- plugin with bounding box and argument
- more options for generate-tasks operator
- fix manifest by updating cloud storage to CloudFiles
- fix load-h5 operator when only cutout size is provided
- fix grid size bug of generating tasks
- read-pngs parameter name change.
- rename several operators to make them more consistent: cutout --> read-precomputed, save --> write-precomputed, save-pngs --> write-pngs
- add a plugin to invert image / affinitymap intensity
- replace global_offset with voxel_offset to be consistent with the neuroglancer info file
- change operator name `fetch-task` to `fetch-task-from-sqs` to be more specific.
- output tasks as a numpy array and save as npy file.
- work with slurm cluster
- fetch task from numpy array
- hdf5 file with cutout range
- fix neuroglancer visualization for segmentation.
- renamed custom-operator to plugin
- new operator: normalize-intensity
- a plugin system with a median filter example
- support cutout from hdf5 file
- new operator: cutout-dvid-target
- cutout the whole volume by default
- rename inference backend `general` to `universal`
- threshold operator
- remove convnet inference backend `pytorch-multitask`, since it can be covered by the `pytorch` backend.
- a general convnet inference backend
- support combined convnet inference including semantic and affinity map inference.
- all-zero option for create-chunk operator and inference test in travis
- new operator to delete chunk for releasing memory
- log-summary operator will work for combined inference
- added corresponding documentation
- added new operator
- make `verbose` an integer rather than a boolean to allow varying levels of verbosity.
- add setup-env operator to automatically compute the patch number, cloud storage block size, and other metadata, and to ingest tasks into the AWS SQS queue. After this operation, you are ready to launch workers!
- support cropped output patch size for inference
- refactored the normalize section contrast operator to make it faster and easier to use. We precompute a lookup table to avoid redundant computation of the voxel mapping.
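The lookup-table trick can be sketched as follows (a minimal sketch of the idea, not chunkflow's exact code): compute the per-intensity mapping once for all 256 uint8 values, then apply it to each section with a single fancy-indexing pass instead of per-voxel arithmetic.

```python
import numpy as np

def build_lut(lower, upper):
    """Precompute a 256-entry lookup table that linearly stretches
    intensities in [lower, upper] to [0, 255] (illustrative sketch)."""
    lut = np.arange(256, dtype=np.float32)
    lut = (lut - lower) / max(upper - lower, 1) * 255.0
    return np.clip(lut, 0, 255).astype(np.uint8)

lut = build_lut(50, 200)
section = np.array([[0, 50], [200, 255]], dtype=np.uint8)
normalized = lut[section]  # one indexing pass per section, no per-voxel math
```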
- avoid creating a mask buffer by directly applying the high-mip mask to low-mip chunk
- fix a typo of thumbnail_mip
- add more complex production inference example
- remove c++ compilation module
- neuroglancer operator works with multiple chunks (#123)
- add connected components operator
- add iterative mean agglomeration operator, including watershed computation.
- tutorial for cleft and cell boundary detection (#123)
- a new operator called `save-images` to save a chunk as a series of PNG images.
- add log directory parameter for the benchmark script.
- queue becomes None
- pznet was not working.
- fix Dockerfile for pznet build (not sure whether breaks PyTorch or not)
- add kubernetes documentation to find monitor of operations per second and bandwidth usage.
- change the mask command parameter `mask-mip` to `mip`
- fix log uploading and cloud watch
- fix the misplaced log in task, which made the inference log not work
- fix blackout section bug. The previous implementation had an indexing error when blacking out Z sections outside the chunk.
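The fixed logic can be sketched like this (a hypothetical helper, not chunkflow's actual code); the point is the bounds check that skips sections falling outside the chunk.

```python
import numpy as np

def blackout_sections(chunk, voxel_offset_z, black_section_ids):
    """Zero out the listed global Z sections; sections outside this chunk
    are skipped, which is the bounds check the buggy version was missing."""
    for z in black_section_ids:
        local_z = z - voxel_offset_z
        if 0 <= local_z < chunk.shape[0]:
            chunk[local_z] = 0
    return chunk

chunk = np.ones((4, 3, 3), dtype=np.uint8)
# Chunk covers global Z range [10, 14); sections 9 and 100 must be skipped.
blackout_sections(chunk, voxel_offset_z=10, black_section_ids=[9, 11, 100])
```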
- clarify patch, chunk and block.
- add realistic example for ConvNet inference.
- merge operator upload-log into save, since we normally just put the log inside the saved volume.
- add batch size option for ConvNet inference
- better memory footprint by deleting the ascontiguousarray copy
- the inference framework was hard-coded as identity; fixed this bug.
- updated the documentation to focus on chunk operators, not just ConvNet Inference.
- add option to blackout some sections
- add a new operator: normalize the sections using precomputed histogram
- add type checking for most function arguments and kwargs
- rename offset_array to chunk
- rename main.py to flow.py to match the package name
- added global parameters
- separate out cloud-watch operator
- rename consume_task.py to main.py for a more meaningful name
- reuse operators in the loop, so each operator is constructed only once.
- add operator names so that more than one operator of the same class can be used. For example, in the inference pipeline we might mask out both the image and the affinitymap; the two mask operators need different names.
- make all operators inherit from a base class
- move all operators to a separate folder for cleaner code.
- the default offset in the read-file operator was not working correctly.
- processors for read/write hdf5 files
- processor to create a fake image chunk for tests
- travis test for chunkflow commandline interface
- fix a bug in OffsetArray: its attributes changed after numpy operations. Some operations still change the attributes; this will be fixed later.
- add documentation to release pypi package
- decompose all the code into individual functions that process chunks
- the command line usage is completely different
- no multiprocessing internally; it has to be implemented in shell scripts.
- composable commandline interface. much easier to use and compose operations.
the following texts are templates for adding changelog entries