# 3D Pose Transfer with Correspondence Learning and Mesh Refinement

Project | Paper | Video | Video (in Chinese)

Chaoyue Song, Jiacheng Wei, Ruibo Li, Fayao Liu, Guosheng Lin

in NeurIPS, 2021.
## News

**21/11/2022**: We have released our latest work on unsupervised 3D pose transfer; please check it here. In that paper, we present X-DualNet, an unsupervised deep learning framework that solves the 3D pose transfer problem in an end-to-end fashion. Through extensive experiments on human and animal meshes, we demonstrate that X-DualNet achieves performance comparable to state-of-the-art supervised approaches, both qualitatively and quantitatively, and even outperforms some of them. The code of X-DualNet will be released here soon.
## Installation

- Clone this repo:

```bash
git clone https://github.com/ChaoyueSong/3d-corenet.git
cd 3d-corenet
```
- Install the dependencies. Our code has been tested with Python 3.6 and PyTorch 1.8 (earlier versions may also work; please install PyTorch according to your CUDA version). We also need pymesh.

```bash
conda create -n 3d_corenet python=3.6
conda activate 3d_corenet

# install pytorch and pymesh
conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
conda install -c conda-forge pymesh2
```
- Clone the Synchronized-BatchNorm-PyTorch repo:

```bash
cd models/networks/
git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
cd ../../
```
## Dataset preparation

We use SMPL as the human mesh data; please download the data here. We generate our animal mesh data using SMAL; please download it here.
## Testing

By default, the latest checkpoint is loaded; this can be changed with `--which_epoch`.

### Human meshes

Download the pretrained model from the pretrained model link and save it in `checkpoints/human`. Then run:

```bash
python test.py --dataset_mode human --dataroot [Your data path] --gpu_ids 0
```

The results will be saved in `test_results/human/` by default. `human_test_list` is randomly chosen for testing.
### Animal meshes

Download the pretrained model from the pretrained model link and save it in `checkpoints/animal`. Then run:

```bash
python test.py --dataset_mode animal --dataroot [Your data path] --gpu_ids 0
```

The results will be saved in `test_results/animal/` by default. `animal_test_list` is randomly chosen for testing. For the calculation of CD and EMD, please check TMNet and MSN.
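For intuition, Chamfer Distance (CD) averages the squared nearest-neighbor distance between two vertex sets, measured in both directions. The sketch below is illustrative only and is not the evaluation code used in the paper; please use the TMNet/MSN implementations for reported numbers.

```python
# A minimal pure-Python sketch of the (squared) Chamfer Distance between two
# point sets. EMD, by contrast, requires an optimal one-to-one matching
# between the sets, which the TMNet and MSN codebases implement.
def chamfer_distance(a, b):
    """Average squared nearest-neighbor distance, summed over both directions."""
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    a_to_b = sum(min(sq_dist(p, q) for q in b) for p in a) / len(a)
    b_to_a = sum(min(sq_dist(q, p) for p in a) for q in b) / len(b)
    return a_to_b + b_to_a

# Identical sets have zero distance; a single point shifted by 1 along x
# contributes a squared distance of 1 in each direction.
print(chamfer_distance([(0.0, 0.0, 0.0)], [(0.0, 0.0, 0.0)]))  # → 0.0
print(chamfer_distance([(0.0, 0.0, 0.0)], [(1.0, 0.0, 0.0)]))  # → 2.0
```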
## Training

To train new models on human meshes, run:

```bash
python train.py --dataset_mode human --dataroot [Your data path] --niter 100 --niter_decay 100 --batchSize 8 --gpu_ids 0,1
```

The output meshes produced during training will be saved in `output/human/`.
To train new models on animal meshes, run:

```bash
python train.py --dataset_mode animal --dataroot [Your data path] --niter 100 --niter_decay 100 --batchSize 8 --gpu_ids 0,1
```

The output meshes produced during training will be saved in `output/animal/`.

Please change the batch size and `gpu_ids` as desired. To continue training from a checkpoint, use `--continue_train`.
## Citation

If you use this code for your research, please cite the following work:

```bibtex
@inproceedings{song20213d,
  title={3D Pose Transfer with Correspondence Learning and Mesh Refinement},
  author={Song, Chaoyue and Wei, Jiacheng and Li, Ruibo and Liu, Fayao and Lin, Guosheng},
  booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
  year={2021}
}
```
## Acknowledgments

This code is heavily based on CoCosNet, whose pix2pix architecture we rewrite to ver2ver. We also use the Optimal Transport code from FLOT, the data and edge loss code from NPT, and Synchronized Batch Normalization. We thank all the authors for their wonderful code!