SSD: Single Shot MultiBox Object Detector, in PyTorch

Table of Contents

  • Introduction
  • Installation
  • Datasets
  • Training SSD
  • Test

Introduction

This is an SSD model based on the project by Max DeGroot. I corrected some bugs in the code and successfully ran it on GPUs on Google Cloud.

SSD (Single Shot MultiBox Object Detector) detects objects in an image and outputs bounding boxes for them. The method is faster than Faster R-CNN and Mask R-CNN while still yielding good accuracy.

Installation

  • Install PyTorch by selecting your environment on the website and running the appropriate command.
  • Clone this repository.
    • Note: We currently only support Python 3+.
  • Then download the dataset by following the instructions below.
  • We support Visdom for real-time loss visualization during training!
    • To use Visdom in the browser:
    # First install Python server and client
    pip install visdom
    # Start the server (probably in a screen or tmux)
    python -m visdom.server
    • Then (during training) navigate to http://localhost:8097/ (see the Train section below for training details); a minimal sketch of streaming losses to Visdom is shown after this list.
  • Note: For training, we currently support VOC, and aim to add COCO and ImageNet support in the future.
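
As an illustration of how real-time loss visualization with Visdom works, here is a minimal sketch; the window title and the hard-coded loss values are placeholders, not what train.py actually logs.

# Minimal Visdom sketch: window title and loss values are placeholders.
import torch
import visdom

viz = visdom.Visdom()  # assumes `python -m visdom.server` is already running
win = viz.line(X=torch.zeros(1), Y=torch.zeros(1),
               opts=dict(title='SSD training loss', xlabel='iteration', ylabel='loss'))

for iteration, loss_value in enumerate([2.3, 1.9, 1.7]):  # placeholder losses
    viz.line(X=torch.tensor([iteration]), Y=torch.tensor([loss_value]),
             win=win, update='append')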

Datasets

To make things easy, we provide bash scripts to handle the dataset downloads and setup for you. We also provide simple dataset loaders that inherit from torch.utils.data.Dataset, making them fully compatible with the torchvision.datasets API.
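
To illustrate what compatibility with the torch.utils.data API means in practice, here is a self-contained sketch; the toy class below is a stand-in for the repository's VOC loader, not its actual implementation.

# Toy stand-in for the VOC loader: any Dataset returning (image, target) pairs
# can be fed to a standard DataLoader.
import torch
import torch.utils.data as data

class ToyDetectionDataset(data.Dataset):
    def __init__(self, num_samples=8):
        self.num_samples = num_samples

    def __len__(self):
        return self.num_samples

    def __getitem__(self, index):
        image = torch.rand(3, 300, 300)                      # SSD300 input size
        target = torch.tensor([[0.1, 0.1, 0.5, 0.5, 14.0]])  # [xmin, ymin, xmax, ymax, class]
        return image, target

loader = data.DataLoader(ToyDetectionDataset(), batch_size=4, shuffle=True)
images, targets = next(iter(loader))  # images: [4, 3, 300, 300]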

VOC Dataset

PASCAL VOC: Visual Object Classes

Download VOC2007 trainval & test
git clone https://github.com/yczhang1017/SSD_resnet_pytorch.git
# navigate to the home directory of the SSD model; the dataset will be downloaded into the data folder
cd SSD_resnet_pytorch
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2007.sh # <directory>
Download VOC2012 trainval
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/VOC2012.sh # <directory>

COCO (not fully implemented yet)

Microsoft COCO: Common Objects in Context

Download COCO 2014
# specify a directory for dataset to be downloaded into, else default is ~/data/
sh data/scripts/COCO2014.sh

Training SSD

cd weights
wget https://s3.amazonaws.com/amdegroot-models/vgg16_reducedfc.pth
# adjust the keys in the weights file to fit the current model
python3 vggweights.py
cd ..
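
The key-adjustment step is essentially a rename of the entries in the downloaded state_dict so they match this model's parameter names. Below is a rough sketch of the idea; the 'vgg.' prefix is an assumption, and the real mapping lives in vggweights.py (and ssdweights.py for the pre-trained detector).

# Illustrative only: the actual key mapping is defined in vggweights.py.
import torch

old_state = torch.load('weights/vgg16_reducedfc.pth', map_location='cpu')
new_state = {'vgg.' + key: value for key, value in old_state.items()}  # rename keys
torch.save(new_state, 'weights/vgg16_reducedfc_renamed.pth')
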
  • To train SSD using the train script simply specify the parameters listed in train.py as a flag or manually change them.
# use VGG16
python3 train.py
# use ResNet50
python3 train.py --model 'resnet' --basenet 'resnet50.pth'
# run in the background so training keeps going after you log out
nohup python3 -u train.py --model 'resnet' --basenet 'resnet50.pth' > r1.log </dev/null 2>&1 &
  • Note:
    • For training, an NVIDIA GPU is strongly recommended for speed. It takes about two days to iterate over 120,000 × 24 images on a Tesla K80 GPU; ResNet50 takes a little longer than VGG16. The time should be within about one day on a Tesla P4 or P100.
    • For instructions on Visdom usage/installation, see the Installation section.
    • You can pick up training from a checkpoint by specifying its path as one of the training parameters (again, see train.py for options); a sketch of the general resume mechanism is shown after these notes.
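
Resuming from a checkpoint generally amounts to loading the saved state_dict back into the network before training continues. Here is a minimal sketch of that mechanism with a toy model; the real SSD network and resume option are defined in train.py.

# Toy model and checkpoint path; train.py handles the real SSD network.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU())  # stand-in for the SSD network
torch.save(net.state_dict(), 'checkpoint_demo.pth')  # what a training checkpoint contains

net.load_state_dict(torch.load('checkpoint_demo.pth', map_location='cpu'))
# training would then continue from these weights instead of from the base network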

Test

Use a pre-trained SSD network for detection

Download a pre-trained network

cd weights
wget https://s3.amazonaws.com/amdegroot-models/ssd300_mAP_77.43_v2.pth
# adjust the keys in the weights file to fit the current model
python3 ssdweights.py      

Test and evaluate mean AP (average precision)

  • To test a trained network:
# use VGG16
python3 test.py
# use ResNet50
python3 test.py --model 'resnet' --trained_model 'weights/ssd300_resnet.pth'

Currently, we get an mAP of 86% for VGG16 and 67% for ResNet50.
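
The reported mAP is the mean, over the 20 VOC classes, of each class's average precision, i.e. the area under its precision-recall curve. Below is a toy illustration of that quantity; the repository's evaluation follows the official VOC protocol rather than this simple integration.

# Toy AP computation: area under a made-up precision-recall curve.
import numpy as np

recall    = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
precision = np.array([1.0, 0.9, 0.8, 0.7, 0.5, 0.3])
ap = np.trapz(precision, recall)  # AP for one class
print(f'AP for this class: {ap:.3f}')
# mAP = mean of the per-class AP values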

Display images

# use VGG16
python3 demo.py

The output images are saved in the demo folder (see test example 1 and test example 2).
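
demo.py essentially overlays the predicted boxes and class scores on the input image before saving it. Here is a minimal matplotlib sketch of that idea; the image, box, label, and score below are made-up values, not demo.py's actual output.

# Illustrative only: the detection values are hard-coded placeholders.
import matplotlib
matplotlib.use('Agg')  # save to file without a display
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import numpy as np

image = np.random.rand(300, 300, 3)  # stand-in for a test image
xmin, ymin, xmax, ymax, label, score = 60, 40, 220, 260, 'dog', 0.92

fig, ax = plt.subplots(1)
ax.imshow(image)
ax.add_patch(patches.Rectangle((xmin, ymin), xmax - xmin, ymax - ymin,
                               fill=False, edgecolor='red', linewidth=2))
ax.text(xmin, ymin - 5, f'{label}: {score:.2f}', color='red')
plt.savefig('detection_example.png')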
