This repository is based on [DABNet].
At the time of the challenge, DABNet was the most accurate open-source lightweight network for semantic segmentation. It was still too large to satisfy the competition requirement of under 250k parameters, so the number of channels was reduced. Shrinking the network meant the pretrained weights of the original DABNet could not be reused, so training starts from randomly initialized weights.
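A quick sanity check for the shrunk network is to count its trainable parameters against the 250k budget. The snippet below is a minimal sketch; the `EyeSegNet` class name and its `num_classes` argument are assumptions, not the actual identifiers used in this repository.

```python
import torch

from model import EyeSegNet  # hypothetical import; substitute the real model class

def count_parameters(model: torch.nn.Module) -> int:
    """Return the number of trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

model = EyeSegNet(num_classes=4)  # class count is an assumption
n_params = count_parameters(model)
print(f"Trainable parameters: {n_params:,}")
assert n_params < 250_000, "Model exceeds the 250k-parameter competition limit"
```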
Loss | Leaderboard IoU | Model File |
---|---|---|
Cross-Entropy loss | 0.9504 | - |
Dice loss | 0.9493 | - |
Focal loss | 0.9516 | - |
Focal and Dice losses combo | 0.9519 | GoogleDrive |
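For reference, the best-scoring objective can be written as a weighted sum of Focal and Dice terms. The sketch below illustrates that combination for multi-class segmentation; it is not the exact implementation behind the numbers above, and the loss weights and focal `gamma` are assumptions.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0):
    """Multi-class focal loss; logits (N, C, H, W), targets (N, H, W) with class indices."""
    ce = F.cross_entropy(logits, targets, reduction="none")  # per-pixel cross-entropy
    p_t = torch.exp(-ce)                                     # probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()

def dice_loss(logits, targets, eps=1e-6):
    """Soft Dice loss averaged over classes."""
    num_classes = logits.shape[1]
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes).permute(0, 3, 1, 2).float()
    intersection = (probs * one_hot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    return 1.0 - ((2.0 * intersection + eps) / (union + eps)).mean()

def focal_dice_loss(logits, targets, focal_weight=1.0, dice_weight=1.0):
    """Weighted combination of the two terms (weights here are assumptions)."""
    return focal_weight * focal_loss(logits, targets) + dice_weight * dice_loss(logits, targets)
```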
Place images for testing into the data/test/images folder. The model file "Eye_segnet_fd_g1_e40_lr_0.010_max_dice.pth.tar" should be placed into the data folder. Then run
./submission.sh
Adding "--device -1" to the command inside the script would run the script on CPU. Zipped file for submission should be in "data" folder after running inference. After placing "train" and "validation" folders inside "data" allows to train the model. Both folders should have "labels" and "images" inside them as in original dataset.
- Annotations were provided only for a small portion of the dataset. Given the high pixel-wise accuracy on this rather simple task, creating masks by predicting labels for the unannotated data would have expanded the training set massively (a pseudo-labelling sketch is given below).
- Knowledge Distillation
- Stronger regularization along with longer training schedule
- Refactor command-line argument parsing with python-fire
- Add Tensorboard logging
- Add Docker support
- Replace hard-coded constants with configurable variables
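As a rough illustration of the pseudo-labelling idea mentioned above: run the trained model over the unannotated images and save the predicted masks as new labels. This is only a sketch under several assumptions: the model class name, the "state_dict" key, the unlabeled-data folder layout, and the grayscale [0, 1] preprocessing are placeholders rather than the repository's actual conventions.

```python
from pathlib import Path

import numpy as np
import torch
from PIL import Image

from model import EyeSegNet  # hypothetical import; substitute the real model class

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = EyeSegNet(num_classes=4).to(device)  # class count is an assumption
checkpoint = torch.load("data/Eye_segnet_fd_g1_e40_lr_0.010_max_dice.pth.tar",
                        map_location=device)
model.load_state_dict(checkpoint["state_dict"])  # key name is an assumption
model.eval()

unlabeled_dir = Path("data/unlabeled/images")    # assumed location of unannotated images
pseudo_label_dir = Path("data/unlabeled/labels")
pseudo_label_dir.mkdir(parents=True, exist_ok=True)

with torch.no_grad():
    for image_path in sorted(unlabeled_dir.glob("*.png")):
        # Assumed preprocessing: grayscale image scaled to [0, 1].
        image = np.asarray(Image.open(image_path).convert("L"), dtype=np.float32) / 255.0
        tensor = torch.from_numpy(image)[None, None].to(device)  # shape (1, 1, H, W)
        logits = model(tensor)
        mask = logits.argmax(dim=1).squeeze(0).cpu().numpy().astype(np.uint8)
        # Save the predicted mask as a pseudo-label with the same file name.
        np.save(pseudo_label_dir / f"{image_path.stem}.npy", mask)
```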