This is the official implementation of the paper *CoCycleReg: Collaborative Cycle-consistency Method for Multi-modal Medical Image Registration*.
Some code in this repository is borrowed from VoxelMorph and NeMAR.
If you are using conda, you can create the environment with:
conda env create -f environment.yaml
-
Prepare the training data with the following layout:
├── /the/path/of/training/data/
│   ├── img1_modality1.npy
│   ├── img1_modality2.npy
│   ├── img2_modality1.npy
│   ├── img2_modality2.npy
│   └── ......
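As a sketch of this layout, the snippet below writes dummy paired volumes with the expected `imgN_modalityM.npy` names. The volume shape and `float32` dtype here are assumptions for illustration; use the dimensions of your own dataset.

```python
import tempfile
from pathlib import Path

import numpy as np

# Assumed volume shape for illustration; replace with your data's dimensions.
SHAPE = (32, 32, 32)

# Stands in for /the/path/of/training/data/ from the layout above.
data_dir = Path(tempfile.mkdtemp())

# Write one modality-1/modality-2 pair per subject, following the
# imgN_modalityM.npy naming scheme shown above.
for subject in ("img1", "img2"):
    for modality in ("modality1", "modality2"):
        volume = np.random.rand(*SHAPE).astype(np.float32)
        np.save(data_dir / f"{subject}_{modality}.npy", volume)

print(sorted(p.name for p in data_dir.glob("*.npy")))
```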
-
Prepare the validation data with the following layout (each image has a matching segmentation map used for evaluation):
├── /the/path/of/validating/data/
│   ├── img1_modality1.npy
│   ├── img1_modality2.npy
│   ├── img1_modality1_seg.npy
│   ├── img1_modality2_seg.npy
│   ├── img2_modality1.npy
│   ├── img2_modality2.npy
│   ├── img2_modality1_seg.npy
│   ├── img2_modality2_seg.npy
│   └── ......
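Since a missing `_seg.npy` file is an easy mistake to make, the helper below checks that every image in the validation directory has its matching segmentation. It is an illustrative utility following the naming scheme above, not part of the released code.

```python
import tempfile
from pathlib import Path

import numpy as np

def check_validation_layout(data_dir, subjects, modalities=("modality1", "modality2")):
    """Return the names of any expected .npy files that are missing."""
    missing = []
    for subject in subjects:
        for modality in modalities:
            for suffix in ("", "_seg"):
                name = f"{subject}_{modality}{suffix}.npy"
                if not (Path(data_dir) / name).exists():
                    missing.append(name)
    return missing

# Demo on a temporary directory: img1 is complete, img2 lacks segmentations.
val_dir = Path(tempfile.mkdtemp())
dummy = np.zeros((8, 8, 8), dtype=np.float32)
for name in ("img1_modality1", "img1_modality2",
             "img1_modality1_seg", "img1_modality2_seg",
             "img2_modality1", "img2_modality2"):
    np.save(val_dir / f"{name}.npy", dummy)

print(check_validation_layout(val_dir, ["img1", "img2"]))
# prints the two missing img2 segmentation files
```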
-
Set the data paths, GPU ID, batch size, and other parameters in config.yaml.
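The fragment below only illustrates the kind of settings meant; the actual key names and structure are defined by this repository's config.yaml, and every key shown here is an assumption.

```yaml
# Illustrative fragment — check config.yaml in the repo for the real key names.
data:
  train_dir: /the/path/of/training/data/
  val_dir: /the/path/of/validating/data/
train:
  gpu_id: 0
  batch_size: 2
  output_dir: /the/path/of/output/
```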
-
Start training by running:
python train.py
-
TensorBoard is supported; the log files are written to /the/path/of/output/log/.
-
The trained weights are saved in /the/path/of/output/pth/.
If you run into any problems, please feel free to open an issue. Thank you!