Code for 'Unsupervised Surgical Instrument Segmentation via Anchor Generation and Semantic Diffusion' (MICCAI 2020).
Paper and Video Demo.
- Recommended Environment: Python 3.5, CUDA 10.0, PyTorch 1.3.1
- Install dependencies: pip3 install -r requirements.txt
- Download our data for EndoVis 2017 from Baidu Yun (PIN:m0o7) or Google Drive.
- Unzip the file and put it into the current directory.
- The data includes the following sub-directories (a quick check of the layout is sketched after this list):
  - image: Raw images (left frames) from the EndoVis 2017 dataset.
  - ground_truth: Ground truth of binary surgical instrument segmentation.
  - cues: Hand-designed coarse cues for surgical instruments.
  - anchors: Anchors generated by fusing the cues.
  - prediction: Final probability maps output by our trained model (single-stage setting).
Simply run python3 main.py --config config-endovis17-SS-full.json.
The config file config-endovis17-SS-full.json is for the full model in the single-stage setting (SS). For other experimental settings in our paper, please modify the config file accordingly, and adjust train_train_datadict, train_test_datadict, and test_datadict in main.py if necessary; a sketch for inspecting a config file before editing it is given below.
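Since the exact contents of the config files and of the datadict variables are defined by the repository code, the following is only a hypothetical sketch of how you might inspect a config JSON before editing it for another experimental setting; it makes no assumptions about specific key names.

```python
import json
import pprint

# Hypothetical helper: load a config JSON and print its top-level keys,
# so you can see what to change for another experimental setting.
# The actual key names are whatever the repository's config files contain.
def inspect_config(path="config-endovis17-SS-full.json"):
    with open(path, "r") as f:
        config = json.load(f)
    pprint.pprint(sorted(config.keys()))
    return config

if __name__ == "__main__":
    inspect_config()
```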
Results will be saved in a folder whose name is given by the naming entry in the config file.
This output folder will include the following sub-directories (a sketch for loading these outputs follows the list):
  - logs: A TensorBoard logging file and a NumPy logging file.
  - models: Trained models.
  - pos_prob: Probability maps for instruments.
  - pos_mask: Segmentation masks for instruments.
  - neg_prob: Probability maps for non-instruments.
  - neg_mask: Segmentation masks for non-instruments.
TO DO.
License: MIT