
Commit

Add markdown lint in pre-commit hook (open-mmlab#4255)
* Add markdown lint in pre-commit hook

* add markdown lint
xvjiarui authored Dec 9, 2020
1 parent 2fa9748 commit 83a9643
Showing 62 changed files with 350 additions and 184 deletions.
26 changes: 21 additions & 5 deletions .github/CONTRIBUTING.md
@@ -13,17 +13,19 @@ All kinds of contributions are welcome, including but not limited to the following
4. create a PR
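
A minimal sketch of the usual fork-and-PR flow behind these steps (the fork URL and branch name are placeholders):

```shell
# clone your fork and create a feature branch (names are placeholders)
git clone https://github.com/<your-username>/mmdetection.git
cd mmdetection
git checkout -b add-my-feature
# ... make your changes, then commit and push ...
git commit -am "Add my feature"
git push origin add-my-feature
# finally, open a pull request against open-mmlab/mmdetection on GitHub
```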

Note

- If you plan to add some new features that involve large changes, it is encouraged to open an issue for discussion first.
- If you are the author of some papers and would like to include your method in mmdetection, please let us know (open an issue or contact the maintainers). We will greatly appreciate your contribution.
- For new features and new modules, unit tests are required to improve the code's robustness.

## Code style

### Python

We adopt [PEP8](https://www.python.org/dev/peps/pep-0008/) as the preferred code style.

We use the following tools for linting and formatting:

- [flake8](http://flake8.pycqa.org/en/latest/): linter
- [yapf](https://github.com/google/yapf): formatter
- [isort](https://github.com/timothycrosley/isort): sort imports
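
For reference, a minimal sketch of invoking these tools by hand; the target paths are illustrative, and the project's own settings files (e.g. `setup.cfg`) govern the exact behaviour:

```shell
# format in place, sort imports, then lint (target paths are examples)
yapf --recursive --in-place mmdet/ tools/
isort mmdet/ tools/       # isort < 5 may need the -rc flag for directories
flake8 mmdet/ tools/
```
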
@@ -36,19 +38,33 @@ The config for a pre-commit hook is stored in [.pre-commit-config](../.pre-commi

After you clone the repository, you will need to install and initialize the pre-commit hook.

```
```shell
pip install -U pre-commit
```

From the repository folder
```

```shell
pre-commit install
```

If you are facing issues when installing markdown lint, you may install Ruby for markdown lint as follows:

```shell
# install rvm
curl -L https://get.rvm.io | bash -s -- --autolibs=read-fail
# set up environment
echo 'source $HOME/.bash_profile' >> ~/.bashrc
source ~/.profile
rvm autolibs disable
# install ruby
rvm install 2.7.1
```
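
You can then confirm that the Ruby toolchain is available before re-running `pre-commit install` (a quick sanity check, assuming the rvm steps above succeeded):

```shell
# confirm the Ruby toolchain is on PATH before re-running pre-commit
ruby --version
gem --version
```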

After this, the code linters and formatter will be enforced on every commit.

> Before you create a PR, make sure that your code lints and is formatted by yapf.

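To check this locally before opening a PR, you can also invoke the hooks by hand — a sketch using standard pre-commit commands (the hook id matches the `.pre-commit-config.yaml` entry shown later in this commit):

```shell
# run every configured hook against the whole repository
pre-commit run --all-files

# or run a single hook by its id, e.g. only the markdown linter
pre-commit run markdownlint --all-files
```
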
### C++ and CUDA

We follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html).
9 changes: 7 additions & 2 deletions .github/ISSUE_TEMPLATE/error-report.md
@@ -10,17 +10,21 @@ assignees: ''
Thanks for your error report and we appreciate it a lot.

**Checklist**

1. I have searched related issues but cannot get the expected help.
2. The bug has not been fixed in the latest version.

**Describe the bug**
A clear and concise description of what the bug is.

**Reproduction**

1. What command or script did you run?
```

```none
A placeholder for the command.
```

2. Did you make any modifications on the code or config? Did you understand what you have modified?
3. What dataset did you use?

@@ -33,7 +37,8 @@ A placeholder for the command.

**Error traceback**
If applicable, paste the error traceback here.
```

```none
A placeholder for traceback.
```

19 changes: 14 additions & 5 deletions .github/ISSUE_TEMPLATE/reimplementation_questions.md
@@ -10,17 +10,20 @@ assignees: ''
**Notice**

There are several common situations in the reimplementation issues as below

1. Reimplement a model in the model zoo using the provided configs
2. Reimplement a model in the model zoo on other dataset (e.g., custom datasets)
3. Reimplement a custom model but all the components are implemented in MMDetection
4. Reimplement a custom model with new modules implemented by yourself

There are several things to do for different cases as below.

- For case 1 & 3, please follow the steps in the following sections so that we can quickly identify the issue.
- For case 2 & 4, please understand that we are not able to help much here, because we usually do not know the full code and the users should be responsible for the code they write.
- One suggestion for case 2 & 4 is that the users should first check whether the bug lies in the self-implemented code or the original code. For example, users can first make sure that the same model runs well on supported datasets. If you still need help, please describe what you have done and what you obtained in the issue, follow the steps in the following sections, and be as clear as possible so that we can better help you.

**Checklist**

1. I have searched related issues but cannot get the expected help.
2. The issue has not been fixed in the latest version.

@@ -29,28 +32,34 @@ There are several things to do for different cases as below.
A clear and concise description of the problem you met and what you have done.

**Reproduction**

1. What command or script did you run?
```

```none
A placeholder for the command.
```

2. What config did you run?
```

```none
A placeholder for the config.
```

3. Did you make any modifications on the code or config? Did you understand what you have modified?
4. What dataset did you use?

**Environment**

1. Please run `python mmdet/utils/collect_env.py` to collect necessary environment information and paste it here.
2. You may add other information that may be helpful for locating the problem, such as
   1. How you installed PyTorch [e.g., pip, conda, source]
   2. Other environment variables that may be related (such as `$PATH`, `$LD_LIBRARY_PATH`, `$PYTHONPATH`, etc.)

**Results**

If applicable, paste the related results here, e.g., what you expect and what you get.
```

```none
A placeholder for results comparison
```

5 changes: 5 additions & 0 deletions .pre-commit-config.yaml
@@ -28,6 +28,11 @@ repos:
args: ["--remove"]
- id: mixed-line-ending
args: ["--fix=lf"]
- repo: https://github.com/jumanjihouse/pre-commit-hooks
rev: 2.1.4
hooks:
- id: markdownlint
args: ["-r", "~MD002,~MD013,~MD024,~MD029,~MD033,~MD034,~MD036"]
- repo: https://github.com/myint/docformatter
rev: v1.3.1
hooks:
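
For reference, the `args` above are passed straight to the `mdl` gem that this hook wraps, so an equivalent one-off invocation might look like the sketch below (the target paths are illustrative and assume a local Ruby install):

```shell
gem install mdl
# exclude the same rules the hook disables, then lint the docs
mdl -r "~MD002,~MD013,~MD024,~MD029,~MD033,~MD034,~MD036" docs/ README.md
```
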
2 changes: 2 additions & 0 deletions README.md
@@ -51,6 +51,7 @@ A comparison between v1.x and v2.0 codebases can be found in [compatibility.md](
Results and models are available in the [model zoo](docs/model_zoo.md).

Supported backbones:

- [x] ResNet
- [x] ResNeXt
- [x] VGG
@@ -60,6 +61,7 @@ Supported backbones:
- [x] ResNeSt

Supported methods:

- [x] [RPN](configs/rpn)
- [x] [Fast R-CNN](configs/fast_rcnn)
- [x] [Faster R-CNN](configs/faster_rcnn)
4 changes: 1 addition & 3 deletions configs/atss/README.md
@@ -1,9 +1,8 @@
# Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection


## Introduction

```
```latex
@article{zhang2019bridging,
title = {Bridging the Gap Between Anchor-based and Anchor-free Detection via Adaptive Training Sample Selection},
author = {Zhang, Shifeng and Chi, Cheng and Yao, Yongqiang and Lei, Zhen and Li, Stan Z.},
@@ -12,7 +11,6 @@
}
```


## Results and Models

| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
3 changes: 2 additions & 1 deletion configs/cascade_rcnn/README.md
@@ -1,7 +1,8 @@
# Cascade R-CNN: High Quality Object Detection and Instance Segmentation

## Introduction
```

```latex
@article{Cai_2019,
title={Cascade R-CNN: High Quality Object Detection and Instance Segmentation},
ISSN={1939-3539},
4 changes: 3 additions & 1 deletion configs/centripetalnet/README.md
@@ -1,7 +1,8 @@
# CentripetalNet

## Introduction
```

```latex
@InProceedings{Dong_2020_CVPR,
author = {Dong, Zhiwei and Li, Guoxuan and Liao, Yue and Wang, Fei and Ren, Pengju and Qian, Chen},
title = {CentripetalNet: Pursuing High-Quality Keypoint Pairs for Object Detection},
@@ -18,5 +19,6 @@ year = {2020}
| HourglassNet-104 | [16 x 6](./centripetalnet_hourglass104_mstest_16x6_210e_coco.py) | 190/210 | 16.7 | 3.7 | 44.8 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco/centripetalnet_hourglass104_mstest_16x6_210e_coco_20200915_204804-3ccc61e5.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/centripetalnet/centripetalnet_hourglass104_mstest_16x6_210e_coco/centripetalnet_hourglass104_mstest_16x6_210e_coco_20200915_204804.log.json) |

Note:

- TTA setting is single-scale and `flip=True`.
- The model we released is the best checkpoint rather than the latest checkpoint (box AP 44.8 vs 44.6 in our experiment).
1 change: 0 additions & 1 deletion configs/cityscapes/README.md
@@ -9,7 +9,6 @@
- A conversion [script](../../tools/convert_datasets/cityscapes.py) is provided to convert Cityscapes into COCO format. Please refer to [install.md](../../docs/install.md#prepare-datasets) for details.
- `CityscapesDataset` implemented three evaluation methods. `bbox` and `segm` are standard COCO bbox/mask AP. `cityscapes` is the cityscapes dataset official evaluation, which may be slightly higher than COCO.
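
A sketch of that conversion step; the flags and output directory are illustrative, so adjust them to your data layout:

```shell
# install the official Cityscapes tooling, then convert annotations to COCO-style json
pip install cityscapesscripts
python tools/convert_datasets/cityscapes.py ./data/cityscapes \
    --nproc 8 \
    --out-dir ./data/cityscapes/annotations
```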


### Faster R-CNN

| Backbone | Style | Lr schd | Scale | Mem (GB) | Inf time (fps) | box AP | Config | Download |
10 changes: 6 additions & 4 deletions configs/cornernet/README.md
@@ -1,7 +1,8 @@
# CornerNet

## Introduction
```

```latex
@inproceedings{law2018cornernet,
title={Cornernet: Detecting objects as paired keypoints},
author={Law, Hei and Deng, Jia},
@@ -21,9 +22,10 @@
| HourglassNet-104 | [32 x 3](./cornernet_hourglass104_mstest_32x3_210e_coco.py) | 180/210 | 9.5 | 3.9 | 40.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco/cornernet_hourglass104_mstest_32x3_210e_coco_20200819_203110-1efaea91.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/cornernet/cornernet_hourglass104_mstest_32x3_210e_coco/cornernet_hourglass104_mstest_32x3_210e_coco_20200819_203110.log.json) |

Note:

- TTA setting is single-scale and `flip=True`.
- Experiments with `images_per_gpu=6` are conducted on Tesla V100-SXM2-32GB, while those with `images_per_gpu=3` are conducted on GeForce GTX 1080 Ti.
- Here are the descriptions of each experiment setting:
  - 10 x 5: 10 GPUs with 5 images per GPU. This is the same setting as reported in the original paper.
  - 8 x 6: 8 GPUs with 6 images per GPU. The total batch size is similar to the paper's and only needs 1 node to train.
  - 32 x 3: 32 GPUs with 3 images per GPU. The default setting for 1080 Ti, which needs 4 nodes to train.
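
For example, the 8 x 6 setting can be launched on a single 8-GPU node with the standard distributed training script — a sketch; the config filename follows the naming pattern in `configs/cornernet` and should be double-checked there:

```shell
# one node, 8 GPUs; the per-GPU image count comes from the config itself
./tools/dist_train.sh configs/cornernet/cornernet_hourglass104_mstest_8x6_210e_coco.py 8
```
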
4 changes: 2 additions & 2 deletions configs/dcn/README.md
@@ -1,8 +1,8 @@
# Deformable Convolutional Networks

# Introduction
## Introduction

```
```none
@inproceedings{dai2017deformable,
title={Deformable Convolutional Networks},
author={Dai, Jifeng and Qi, Haozhi and Xiong, Yuwen and Li, Yi and Zhang, Guodong and Hu, Han and Wei, Yichen},
3 changes: 2 additions & 1 deletion configs/deepfashion/README.md
@@ -1,6 +1,6 @@
# DeepFashion

MMFashion(https://github.com/open-mmlab/mmfashion) develops "fashion parsing and segmentation" module
[MMFashion](https://github.com/open-mmlab/mmfashion) develops "fashion parsing and segmentation" module
based on the dataset
[DeepFashion-Inshop](https://drive.google.com/drive/folders/0B7EVK8r0v71pVDZFQXRsMDZCX1E?usp=sharing).
Its annotation follows COCO style.
@@ -38,6 +38,7 @@ After that you can train the Mask RCNN r50 on DeepFashion-In-shop dataset by lau
or creating your own config file.
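
A sketch of that launch with the standard single-GPU training script, using the config linked in the table below:

```shell
# train Mask R-CNN R-50 on DeepFashion-In-shop (single GPU)
python tools/train.py configs/deepfashion/mask_rcnn_r50_fpn_15e_deepfashion.py
```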

## Model Zoo

| Backbone | Model type | Dataset | bbox detection Average Precision | segmentation Average Precision | Config | Download (Google) |
| :---------: | :----------: | :-----------------: | :--------------------------------: | :----------------------------: | :---------:| :-------------------------: |
| ResNet50 | Mask RCNN | DeepFashion-In-shop | 0.599 | 0.584 |[config](https://github.com/open-mmlab/mmdetection/blob/master/configs/deepfashion/mask_rcnn_r50_fpn_15e_deepfashion.py)| [model](https://drive.google.com/open?id=1q6zF7J6Gb-FFgM87oIORIt6uBozaXp5r) | [log](https://drive.google.com/file/d/1qTK4Dr4FFLa9fkdI6UVko408gkrfTRLP/view?usp=sharing) |
3 changes: 2 additions & 1 deletion configs/double_heads/README.md
@@ -1,7 +1,8 @@
# Rethinking Classification and Localization for Object Detection

## Introduction
```

```latex
@article{wu2019rethinking,
title={Rethinking Classification and Localization for Object Detection},
author={Yue Wu and Yinpeng Chen and Lu Yuan and Zicheng Liu and Lijuan Wang and Hongzhi Li and Yun Fu},
3 changes: 1 addition & 2 deletions configs/empirical_attention/README.md
@@ -2,7 +2,7 @@

## Introduction

```
```latex
@article{zhu2019empirical,
title={An Empirical Study of Spatial Attention Mechanisms in Deep Networks},
author={Zhu, Xizhou and Cheng, Dazhi and Zhang, Zheng and Lin, Stephen and Dai, Jifeng},
@@ -11,7 +11,6 @@
}
```


## Results and Models

| Backbone | Attention Component | DCN | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
3 changes: 2 additions & 1 deletion configs/fast_rcnn/README.md
@@ -1,7 +1,8 @@
# Fast R-CNN

## Introduction
```

```latex
@inproceedings{girshick2015fast,
title={Fast r-cnn},
author={Girshick, Ross},
5 changes: 4 additions & 1 deletion configs/faster_rcnn/README.md
@@ -1,7 +1,8 @@
# Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks

## Introduction
```

```latex
@article{Ren_2017,
title={Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks},
journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
@@ -29,6 +30,7 @@
| X-101-64x4d-FPN | pytorch | 2x | - | - | 41.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco/faster_rcnn_x101_64x4d_fpn_2x_coco_20200512_161033-5961fa95.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_x101_64x4d_fpn_2x_coco/faster_rcnn_x101_64x4d_fpn_2x_coco_20200512_161033.log.json) |

## Different regression loss

We trained with R-50-FPN pytorch style backbone for 1x schedule.

| Backbone | Loss type | Mem (GB) | Inf time (fps) | box AP | Config | Download |
@@ -39,6 +41,7 @@ We trained with R-50-FPN pytorch style backbone for 1x schedule.
| R-50-FPN | BoundedIoULoss | | | 37.4 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_bounded_iou_1x_coco-98ad993b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_1x_coco/faster_rcnn_r50_fpn_bounded_iou_1x_coco_20200505_160738.log.json) |

## Pre-trained Models

We also train some models with longer schedules and multi-scale training. The users could finetune them for downstream tasks.

| Backbone | Style | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
4 changes: 2 additions & 2 deletions configs/fcos/README.md
@@ -2,7 +2,7 @@

## Introduction

```
```latex
@article{tian2019fcos,
title={FCOS: Fully Convolutional One-Stage Object Detection},
author={Tian, Zhi and Shen, Chunhua and Chen, Hao and He, Tong},
@@ -23,14 +23,14 @@
| R-101 | caffe | Y | N | N | N | 1x | 10.2 | 17.3 | 39.2 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_r101_caffe_fpn_gn-head_4x4_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_r101_caffe_fpn_gn-head_4x4_1x_coco/fcos_r101_caffe_fpn_gn_1x_4gpu_20200218-13e2cc55.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_r101_caffe_fpn_gn-head_4x4_1x_coco/20200130_004231.log.json) |
| R-101 | caffe | Y | N | N | N | 2x | - | - | 39.1 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_r101_caffe_fpn_gn-head_4x4_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_r101_caffe_fpn_gn-head_4x4_2x_coco/fcos_r101_caffe_fpn_gn_2x_4gpu_20200218-d2261033.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_r101_caffe_fpn_gn-head_4x4_2x_coco/20200130_004231.log.json) |


| Backbone | Style | GN | MS train | Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
|:---------:|:-------:|:-------:|:--------:|:-------:|:--------:|:--------------:|:------:|:------:|:--------:|
| R-50 | caffe | Y | Y | 2x | 6.5 | 22.9 | 38.7 | | |
| R-101 | caffe | Y | Y | 2x | 10.2 | 17.3 | 40.9 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_4x4_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_4x4_2x_coco/fcos_mstrain_640_800_r101_caffe_fpn_gn_2x_4gpu_20200218-d8a4f4cf.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_r101_caffe_fpn_gn-head_mstrain_640-800_4x4_2x_coco/20200130_004232.log.json) |
| X-101 | pytorch | Y | Y | 2x | 10.0 | 9.3 | 42.5 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/fcos/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_4x2_2x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_4x2_2x_coco/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_4x2_2x_coco_20200229-11f8c079.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/fcos/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_4x2_2x_coco/fcos_x101_64x4d_fpn_gn-head_mstrain_640-800_4x2_2x_coco_20200229_222104.log.json) |

**Notes:**

- To be consistent with the author's implementation, we use 4 GPUs with 4 images/GPU for R-50 and R-101 models, and 8 GPUs with 2 images/GPU for X-101 models.
- The X-101 backbone is X-101-64x4d.
- Tricks means setting `norm_on_bbox`, `centerness_on_reg`, `center_sampling` as `True`.
5 changes: 4 additions & 1 deletion configs/foveabox/README.md
@@ -4,6 +4,7 @@ FoveaBox is an accurate, flexible and completely anchor-free object detection sy
Different from previous anchor-based methods, FoveaBox directly learns the object existing possibility and the bounding box coordinates without anchor reference. This is achieved by: (a) predicting category-sensitive semantic maps for the object existing possibility, and (b) producing category-agnostic bounding box for each position that potentially contains an object.

## Main Results

### Results on R50/101-FPN

| Backbone | Style | align | ms-train| Lr schd | Mem (GB) | Inf time (fps) | box AP | Config | Download |
@@ -25,8 +26,10 @@ Different from previous anchor-based methods, FoveaBox directly learns the objec
Any pull requests or issues are welcome.

## Citations

Please consider citing our paper in your publications if the project helps your research. BibTeX reference is as follows.
```

```latex
@article{kong2019foveabox,
title={FoveaBox: Beyond Anchor-based Object Detector},
author={Kong, Tao and Sun, Fuchun and Liu, Huaping and Jiang, Yuning and Shi, Jianbo},