RePOSE: Fast 6D Object Pose Refinement via Deep Texture Rendering (ICCV 2021) [Link]
We present RePOSE, a fast iterative refinement method for 6D object pose estimation. Prior methods perform refinement by feeding zoomed-in input and rendered RGB images into a CNN and directly regressing an update to the pose. Their runtime is slow due to the computational cost of the CNN, which is especially prominent when refining the poses of multiple objects. To overcome this problem, RePOSE leverages image rendering for fast feature extraction using a 3D model with a learnable texture. We call this deep texture rendering: a shallow multi-layer perceptron directly regresses a view-invariant image representation of an object. Furthermore, we use differentiable Levenberg-Marquardt (LM) optimization to refine a pose quickly and accurately by minimizing the feature-metric error between the input and rendered image representations, with no need for zooming in. These image representations are trained such that the differentiable LM optimization converges within a few iterations. As a result, RePOSE runs at 92 FPS and achieves state-of-the-art accuracy of 51.6% on the Occlusion LineMOD dataset, a 4.1% absolute improvement over the prior art, with comparable results on the YCB-Video dataset at a much faster runtime.
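To make the refinement step concrete, below is a minimal, self-contained PyTorch sketch of one differentiable LM update on a 6D pose (axis-angle rotation plus translation). This is not the authors' implementation: the `residual` function compares projected 3D point locations as a stand-in for RePOSE's rendered deep-texture features, and all names (`axis_angle_to_matrix`, `lm_step`, etc.) are illustrative.

```python
# Minimal sketch of a Levenberg-Marquardt pose refinement loop, with a toy
# point-projection residual standing in for RePOSE's feature-metric error.
import torch

def axis_angle_to_matrix(aa):
    # Rodrigues' formula; differentiable w.r.t. the axis-angle vector aa.
    theta = aa.norm() + 1e-8
    k = aa / theta
    K = torch.zeros(3, 3, dtype=aa.dtype)
    K[0, 1], K[0, 2] = -k[2], k[1]
    K[1, 0], K[1, 2] = k[2], -k[0]
    K[2, 0], K[2, 1] = -k[1], k[0]
    return torch.eye(3, dtype=aa.dtype) + torch.sin(theta) * K \
        + (1.0 - torch.cos(theta)) * (K @ K)

def residual(pose, points, target):
    # Stand-in for the feature-metric error: projected point locations vs.
    # target locations. RePOSE instead compares deep-texture feature maps.
    R = axis_angle_to_matrix(pose[:3])
    cam = points @ R.T + pose[3:]
    proj = cam[:, :2] / cam[:, 2:3]          # pinhole projection, f = 1
    return (proj - target).reshape(-1)

def lm_step(pose, points, target, lam=1e-2):
    # One damped Gauss-Newton (Levenberg-Marquardt) update.
    r = residual(pose, points, target)
    J = torch.autograd.functional.jacobian(
        lambda p: residual(p, points, target), pose)
    H = J.T @ J + lam * torch.eye(pose.numel())
    delta = torch.linalg.solve(H, -(J.T @ r).unsqueeze(-1)).squeeze(-1)
    return pose + delta

# Toy problem: recover a known pose from projections of random 3D points.
points = torch.randn(50, 3) + torch.tensor([0.0, 0.0, 5.0])
gt_pose = torch.tensor([0.10, -0.05, 0.02, 0.05, -0.03, 0.10])
with torch.no_grad():
    target = residual(gt_pose, points, torch.zeros(50, 2)).reshape(50, 2)

pose = torch.zeros(6)                         # initial pose estimate
for _ in range(10):
    pose = lm_step(pose, points, target)
print(pose)                                   # approaches gt_pose
```

Note that this sketch computes the Jacobian with `torch.autograd.functional.jacobian` for simplicity; the `lib/csrc/camera_jacobian` extension compiled below suggests the repository uses a custom CUDA kernel for that step instead.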
- Python >= 3.6
- PyTorch == 1.9.0
- torchvision == 0.10.0
- CUDA == 10.1
- LineMOD Dataset
- LineMOD Orig Dataset
- Occlusion LineMOD Dataset
- 3D Models
- Pretrained Models
- Cached Files for LineMOD
- Cached Files for Occlusion LineMOD
- Set up the python environment:
```
$ pip install torch==1.9.0 torchvision==0.10.0
$ pip install Cython==0.29.17
$ sudo apt-get install libglfw3-dev libglfw3
$ pip install -r requirements.txt

# Install the differentiable renderer
$ cd renderer
$ python3 setup.py install
```
- Compile the CUDA extensions under `lib/csrc`:
```
ROOT=/path/to/RePOSE
cd $ROOT/lib/csrc
export CUDA_HOME="/usr/local/cuda-10.1"
cd ransac_voting
python setup.py build_ext --inplace
cd ../camera_jacobian
python setup.py build_ext --inplace
cd ../nn
python setup.py build_ext --inplace
cd ../fps
python setup.py build_ext --inplace
```
- Set up datasets:
```
$ ROOT=/path/to/RePOSE
$ cd $ROOT/data
$ ln -s /path/to/linemod linemod
$ ln -s /path/to/linemod_orig linemod_orig
$ ln -s /path/to/occlusion_linemod occlusion_linemod
$ cd $ROOT/data/model/
$ unzip pretrained_models.zip
$ cd $ROOT/cache/LinemodTest
$ unzip ape.zip benchvise.zip .... phone.zip
$ cd $ROOT/cache/LinemodOccTest
$ unzip ape.zip can.zip .... holepuncher.zip
```
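After linking and unzipping, a quick sanity check that everything landed where the commands above expect can help. This snippet is not part of the repo; run it from `$ROOT`:

```python
# Not part of the repo: checks that the symlinks and unzipped files created
# above exist where the code expects them. Run from the RePOSE root.
import os

for path in ["data/linemod", "data/linemod_orig", "data/occlusion_linemod",
             "data/model", "cache/LinemodTest", "cache/LinemodOccTest"]:
    print(("ok      " if os.path.exists(path) else "MISSING ") + path)
```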
We have 13 categories (ape, benchvise, cam, can, cat, driller, duck, eggbox, glue, holepuncher, iron, lamp, phone) on the LineMOD dataset and 8 categories (ape, can, cat, driller, duck, eggbox, glue, holepuncher) on the Occlusion LineMOD dataset.
Choose one category (replace `ape` below with another category name) and run testing.
- Generate the annotation data:
```
python run.py --type linemod cls_type ape model ape
```
- Test:
```
# Test on the LineMOD dataset
$ python run.py --type evaluate --cfg_file configs/linemod.yaml cls_type ape model ape

# Test on the Occlusion LineMOD dataset
$ python run.py --type evaluate --cfg_file configs/linemod.yaml test.dataset LinemodOccTest cls_type ape model ape
```
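For reference, accuracy on (Occlusion) LineMOD is conventionally reported with the ADD(-S) metric. The sketch below is not this repository's evaluation code and its names (`add_error`, `add_accurate`) are illustrative; it shows the idea for the non-symmetric (ADD) case:

```python
# Illustrative only: the ADD metric commonly used on (Occlusion) LineMOD.
# A pose counts as correct if the mean distance between model points under
# the ground-truth and predicted poses is below 10% of the object diameter.
# (Symmetric objects such as eggbox and glue use ADD-S, which matches each
# point to its closest transformed point instead.)
import numpy as np

def add_error(R_gt, t_gt, R_pred, t_pred, model_points):
    # model_points: (N, 3) vertices of the object's 3D model.
    gt = model_points @ R_gt.T + t_gt
    pred = model_points @ R_pred.T + t_pred
    return np.linalg.norm(gt - pred, axis=1).mean()

def add_accurate(err, diameter, threshold=0.1):
    return err < threshold * diameter
```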
- Generate the annotation data:
```
python run.py --type linemod cls_type ape model ape
```
- Visualize:
```
# Visualize the results on the LineMOD dataset
$ python run.py --type visualize --cfg_file configs/linemod.yaml cls_type ape model ape

# Visualize the results on the Occlusion LineMOD dataset
$ python run.py --type visualize --cfg_file configs/linemod.yaml test.dataset LinemodOccTest cls_type ape model ape
```
```
@InProceedings{Iwase_2021_ICCV,
    author    = {Iwase, Shun and Liu, Xingyu and Khirodkar, Rawal and Yokota, Rio and Kitani, Kris M.},
    title     = {RePOSE: Fast 6D Object Pose Refinement via Deep Texture Rendering},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {3303-3312}
}
```
Our code is largely based on clean-pvnet, and our rendering code is based on neural_renderer. Thank you so much for making this code publicly available!
If you have any questions about the paper or the implementation, please feel free to email me ([email protected])! Thank you!