Wenjing Bian, Zirui Wang, Kejie Li, Victor Adrian Prisacariu. BMVC 2021.
Active Vision Lab, University of Oxford.
First, make sure that you have all dependencies in place. The simplest way to do so is to use anaconda. You can create an anaconda environment called `rayonet` using

```
conda env create -f environment.yaml
conda activate rayonet
```
Next, compile the extension modules. You can do this via

```
python setup.py build_ext --inplace
```
You can download the model trained on 13 ShapeNet categories from this link and place it under the `demo` directory.
You can now test our code on the provided input images in the `demo` folder. To this end, simply run

```
python generate.py configs/demo.yaml
```
This script should create a folder `demo/generation` where the output meshes are stored. The script copies the inputs into the `demo/generation/inputs` folder and creates the meshes in the `demo/generation/meshes` folder. Moreover, it creates a `demo/generation/vis` folder where both inputs and outputs are copied together.
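The layout above can be sketched as a simple mapping from an input image to the paths where its copies and outputs land. This is only an illustration; the `expected_outputs` helper and the mesh/image file extensions are our assumptions, not part of the actual script:

```python
from pathlib import Path

def expected_outputs(input_image: str, generation_dir: str = "demo/generation"):
    """Map an input image to the output locations described above:
    the copied input, the extracted mesh, and the visualisation.
    The .off/.png extensions are assumptions for illustration."""
    stem = Path(input_image).stem
    root = Path(generation_dir)
    return {
        "input": root / "inputs" / Path(input_image).name,
        "mesh": root / "meshes" / f"{stem}.off",
        "vis": root / "vis" / f"{stem}.png",
    }

paths = expected_outputs("demo/chair.png")
print(paths["mesh"])  # demo/generation/meshes/chair.off
```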
The dataset can be built according to the following steps:
- download the ShapeNet dataset v1 and put it into `data/external/ShapeNet`
- generate watertight meshes by following the instructions in the `external/mesh-fusion` folder
- download the preprocessed ShapeNet provided by Occupancy Networks and put it into `data/ShapeNet`
You are now ready to build the dataset:

```
cd scripts
bash dataset_shapenet/build.sh
```

This command builds the dataset containing the ground-truth occupancies in `data/ShapeNet.build/ray_occ`.
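Before running the build script, it can help to verify that the directory layout from the steps above is in place. A minimal sketch; the `check_layout` helper is our own, not part of the repository:

```python
from pathlib import Path

# Directories the build steps above expect, relative to the repo root.
REQUIRED_DIRS = [
    "data/external/ShapeNet",   # ShapeNet v1
    "external/mesh-fusion",     # watertight mesh generation
    "data/ShapeNet",            # preprocessed ShapeNet from Occupancy Networks
]

def check_layout(repo_root="."):
    """Return the list of required directories that are missing."""
    root = Path(repo_root)
    return [d for d in REQUIRED_DIRS if not (root / d).is_dir()]

missing = check_layout()
if missing:
    print("Missing directories:", ", ".join(missing))
else:
    print("Layout looks good; run scripts/dataset_shapenet/build.sh")
```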
When you have installed all binary dependencies and obtained the preprocessed data, you are ready to train new models from scratch.
To generate meshes using a trained model, use

```
python generate.py CONFIG.yaml
```

where you replace `CONFIG.yaml` with the correct config file. The easiest way is to use a pretrained model. You can download the model trained on 13 ShapeNet categories from this link and place it under the `rayonet` directory.
The model trained on 3 classes (airplane, car, and chair) can be downloaded from this link.
For evaluation of the models, we provide two scripts: `eval.py` and `eval_meshes.py`. The main evaluation script is `eval_meshes.py`, which you can run using

```
python eval_meshes.py CONFIG.yaml
```
The script takes the meshes generated in the previous step and evaluates them using a standardized protocol. The output will be written to `.pkl`/`.csv` files in the corresponding generation folder, which can be processed using pandas.
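The pandas step can look like the following minimal sketch. The column names below are invented for the example; the actual columns come from the evaluation script:

```python
import io
import pandas as pd

# Mock evaluation output; the real .csv files live in the generation folder.
csv = io.StringIO(
    "class name,chamfer-L1,iou\n"
    "airplane,0.10,0.60\n"
    "airplane,0.12,0.58\n"
    "chair,0.20,0.50\n"
)
df = pd.read_csv(csv)

# Average each metric per class, as one would for a results table.
summary = df.groupby("class name").mean()
print(summary)
```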
For a quick evaluation, you can also run

```
python eval.py CONFIG.yaml
```

This script runs a fast, method-specific evaluation to obtain some basic quantities that can be computed without extracting the meshes. This evaluation is also conducted automatically on the validation set during training. All results reported in the paper were obtained using the `eval_meshes.py` script.
To train Ray-ONet from scratch, run

```
python train.py CONFIG.yaml
```

where you replace `CONFIG.yaml` with the name of the configuration file you want to use.
You can monitor the training process on http://localhost:6006 using tensorboard:

```
cd OUTPUT_DIR
tensorboard --logdir ./logs --port 6006
```

where you replace `OUTPUT_DIR` with the respective output directory. For available training options, please take a look at `configs/default.yaml`.
We thank Theo Costain for helpful discussions and comments. We thank Stefan Popov for providing the code for CoReNet and guidance on training. Wenjing Bian is supported by China Scholarship Council (CSC).
Our code is built on Occupancy Networks. We thank the authors for the excellent code they provide.
```
@inproceedings{bian2021rayonet,
  title={Ray-ONet: Efficient 3D Reconstruction From A Single RGB Image},
  author={Wenjing Bian and Zirui Wang and Kejie Li and Victor Adrian Prisacariu},
  booktitle={BMVC},
  year={2021}
}
```