# RetinaNet PyTorch

## 1. References

## 2. RetinaNet

### 2.1 RetinaNet Architecture

```bash
# Investigate the backbone (input and output shapes).
# More backbones will be added for experiments (next step: EfficientNet B0-B7).
cd flame/core/model/backbone
python resnet.py --version <resnet18 -> resnet110> --pretrained <if use pretrained weight>
python densenet.py --version <densenet121 -> densenet201> --pretrained <if use pretrained weight>

# Investigate the FPN (input and output shapes).
cd flame/core/model/
python fpn.py

# Investigate the heads (input and output shapes).
cd flame/core/model/head
python efficient_head.py
python head.py

# Investigate the anchor generator (input and output shapes).
cd flame/core/model/
python anchor_generator.py
```
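
The modules above compose into the standard RetinaNet pipeline: backbone features feed the FPN, and shared classification/regression heads plus an anchor generator run on every pyramid level. A hypothetical composition sketch (class and argument names here are placeholders, not the repo's exact classes or signatures):

```python
import torch.nn as nn

class RetinaNetSketch(nn.Module):
    """Illustrative wiring of backbone -> FPN -> heads; not the repo's actual class."""

    def __init__(self, backbone, fpn, cls_head, reg_head, anchor_generator):
        super().__init__()
        self.backbone = backbone                    # e.g. ResNet: image -> C3, C4, C5
        self.fpn = fpn                              # C3..C5 -> pyramid levels P3..P7
        self.cls_head = cls_head                    # shared conv head: class logits per level
        self.reg_head = reg_head                    # shared conv head: box deltas per level
        self.anchor_generator = anchor_generator    # anchors for every pyramid level

    def forward(self, images):
        features = self.fpn(self.backbone(images))          # list of P3..P7 feature maps
        cls_out = [self.cls_head(f) for f in features]       # per-level classification
        reg_out = [self.reg_head(f) for f in features]       # per-level box regression
        anchors = self.anchor_generator(images, features)    # per-level anchor boxes
        return cls_out, reg_out, anchors
```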

### 2.2 Loss

- Focal Loss: computes the loss of the classification head (a minimal sketch follows below).
- Smooth L1: computes the loss of the regression head.
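
The following is a minimal sketch of the sigmoid focal loss from the RetinaNet paper, written with plain PyTorch ops; it is illustrative and not necessarily the exact implementation used in `flame/core`:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # logits, targets: (num_anchors, num_classes); targets are 0/1 labels.
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    probs = torch.sigmoid(logits)
    p_t = probs * targets + (1 - probs) * (1 - targets)        # prob of the true label
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)    # class-balancing weight
    return (alpha_t * (1 - p_t) ** gamma * ce).sum()

# The regression head typically uses the built-in smooth L1, e.g.:
# F.smooth_l1_loss(pred_deltas, target_deltas, reduction="sum")
```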

### 2.3 mAP (mean Average Precision)

My technical note on mAP is here; it borrows heavily from the knowledge in this awesome repo.
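
As a quick reminder of what the metric measures: AP is the area under the precision-recall curve for one class, and mAP is the mean of AP over all classes. A minimal sketch of AP with all-point interpolation (VOC2010+/COCO style), shown for illustration only:

```python
import numpy as np

def average_precision(recall, precision):
    # recall/precision: arrays ordered by descending detection confidence.
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    # Make the precision envelope monotonically non-increasing.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas wherever recall changes.
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))
```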

### 2.4 Visualization

## 3. Usage

- Training: train with Focal Loss on the train set and evaluate with Focal Loss on both the train set (again) and the valid set.

  ```bash
  CUDA_VISIBLE_DEVICES=<gpu indices> python -m flame configs/PASCAL/pascal_training.yaml
  ```

- Evaluation: evaluate with the mAP metric and visualize all predictions on the test set.

  ```bash
  CUDA_VISIBLE_DEVICES=<gpu indices> python -m flame configs/PASCAL/pascal_testing.yaml
  ```

## 4. Experiments

### 4.1 VOC2007 and VOC2012

<In progress ...>

### 4.2 COCO

<In progress ...>

### 4.3 Labelme-format Dataset

<In progress ...>