# Dataset preparation

## Directory structure

1. You can download the parsed data from Google Drive. Make sure you have signed the license agreement with each dataset's publisher.
Please note that AGORA and Relative_human are only used for training and evaluating BEV; they are not essential for training ROMP.
Please organize the files following the directory structure below.

```
|-- dataset
|   |-- h36m
|   |   |-- images
|   |   |-- annots.npz
|   |   |-- cluster_results...
|   |-- mpi-inf-3dhp
|   |   |-- images
|   |   |-- annots.npz
|   |   |-- cluster_results...
|   |-- MuCo
|   |   |-- augmented_set
|   |   |-- annots_augmented.npz
|   |-- coco
|   |   |-- images
|   |   |   |-- train2014
|   |   |   |-- val2014
|   |   |   |-- test2014
|   |   |-- annots_train2014.npz
|   |   |-- annots_val2014.npz
|   |-- mpii
|   |   |-- images
|   |   |-- annot
|   |   |-- eft_annots.npz
|   |-- lsp
|   |   |-- hr-lspet
|   |   |   |-- eft_annots.npz
|   |-- crowdpose
|   |   |-- images
|   |   |-- annots_train.npz
|   |   |-- annots_val.npz
|   |   |-- annots_test.npz
|   |-- 3DPW
|   |   |-- imageFiles
|   |   |-- sequenceFiles
|   |   |-- vibe_db
|   |   |-- annots.npz
|   |-- cmu_panoptic
|   |   |-- images
|   |   |-- annots.npz
|   |-- AGORA
|   |   |-- image_vertex_train
|   |   |-- image_vertex_validation
|   |   |-- train
|   |   |-- validation
|   |   |-- test
|   |   |-- annots_train.npz
|   |   |-- annots_validation.npz
|   |-- Relative_human
|   |   |-- images
|   |   |-- train_annots.npz
|   |   |-- val_annots.npz
|   |   |-- test_annots.npz
```

If you hit Google Drive's 'Download limit' error, you can make a copy of the file in your own Google Drive account to avoid it.
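After downloading, a quick sanity check can confirm the annotation files landed where the loaders expect them. The snippet below is a minimal sketch based on the tree above; the root path is a placeholder and the file list only covers a subset of the datasets:

```python
import os

# Placeholder path; point this at your dataset folder.
DATASET_ROOT = '/path/to/your/dataset/folder'

# Subset of the annotation files from the directory tree above.
EXPECTED = [
    'h36m/annots.npz',
    'mpi-inf-3dhp/annots.npz',
    'MuCo/annots_augmented.npz',
    'coco/annots_train2014.npz',
    'mpii/eft_annots.npz',
    'lsp/hr-lspet/eft_annots.npz',
    'crowdpose/annots_train.npz',
    '3DPW/annots.npz',
    'cmu_panoptic/annots.npz',
]

missing = [p for p in EXPECTED
           if not os.path.exists(os.path.join(DATASET_ROOT, p))]
for p in missing:
    print('missing:', p)
if not missing:
    print('All expected annotation files found.')
```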

2. Download the images from the official websites: COCO 2014 images, MPII, CrowdPose, and 3DPW. Please rename each downloaded image folder to 'images' (a scripted version is sketched below).
For AGORA, we use the 1280x720 images.
For Relative_human, please refer to this website.
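If you prefer to script the renaming, something like the following works. The source folder names are assumptions, since each download unpacks to its own naming; check what your archives actually extract to before running it:

```python
import os

# Hypothetical original folder names; verify against your downloads.
renames = {
    'dataset/mpii/mpii_human_pose_v1_images': 'dataset/mpii/images',
    'dataset/crowdpose/crowdpose_images': 'dataset/crowdpose/images',
}

for src, dst in renames.items():
    if os.path.isdir(src) and not os.path.exists(dst):
        os.rename(src, dst)
        print('renamed', src, '->', dst)
```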

3. (Optional) If you downloaded the original videos from the official Human3.6M website, please extract the images via:

```
python ROMP/romp/lib/dataset/preprocess/h36m_extract_frames.py path/to/h36m_video_folder path/to/image_save_folder
# e.g. if you have archives/S1/Videos/Directions 1.54138969.mp4, then run
python h36m_extract_frames.py archives images
```
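For reference, the core of such frame extraction typically looks like the sketch below. This is an illustration using OpenCV, not the actual contents of h36m_extract_frames.py; the naming scheme for the output JPEGs is an assumption:

```python
import os
import sys
import cv2

video_root, save_root = sys.argv[1], sys.argv[2]
os.makedirs(save_root, exist_ok=True)

# Walk the video folder and dump every frame of every .mp4 as a JPEG.
for dirpath, _, files in os.walk(video_root):
    for name in files:
        if not name.endswith('.mp4'):
            continue
        cap = cv2.VideoCapture(os.path.join(dirpath, name))
        # Prefix with the relative subfolder (e.g. S1_Videos) so frames
        # from different subjects do not collide.
        rel = os.path.relpath(dirpath, video_root).replace(os.sep, '_')
        frame_id = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            out_name = '{}_{}_{:06d}.jpg'.format(
                rel, os.path.splitext(name)[0], frame_id)
            cv2.imwrite(os.path.join(save_root, out_name), frame)
            frame_id += 1
        cap.release()
```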

Finally, please set the dataset root path.
If you put all datasets in one folder, you only need to change the dataset_rootdir config to that folder's path, like:

```python
dataset_group.add_argument('--dataset_rootdir', type=str, default='/path/to/your/dataset/folder', help='root dir of all datasets')
```
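Because dataset_rootdir is an ordinary argparse argument, you can also override it on the command line instead of editing the default. The entry script below is a placeholder; use whichever command you actually launch training with:

```
# Placeholder entry script; substitute your actual training command.
python romp/train.py --dataset_rootdir=/path/to/your/dataset/folder
```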

If you put different datasets at different paths, you have to set them separately. For instance, to set the path of the Human3.6M dataset, please change this line to wherever you put Human3.6M, like:

```python
self.data_folder = '/path/to/your/h36m/'
```

## Test the data loading

We can test the data loading of a dataset, like Human3.6M, via:

```
cd ROMP
python -m romp.lib.dataset.h36m --configs_yml='configs/v6.yml'
```

Annotations will be drawn on the input images. The test results will be saved in ROMP/test/.
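The same pattern should work for the other dataset loaders under romp/lib/dataset/. For example, assuming the 3DPW loader module is named pw3d (the module name is a guess; check the file names in that folder):

```
cd ROMP
python -m romp.lib.dataset.pw3d --configs_yml='configs/v6.yml'
```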