Official PyTorch implementation of "Stochastic Conditional Diffusion Models for Robust Semantic Image Synthesis" (ICML 2024).
Juyeon Ko*, Inho Kong*, Dogyun Park, Hyunwoo J. Kim†.
Department of Computer Science and Engineering, Korea University
- Clone repository

  ```bash
  git clone https://github.com/mlvlab/SCDM.git
  cd SCDM
  ```
- Setup conda environment

  ```bash
  conda env create -f environment.yaml
  conda activate scdm
  ```
- CelebAMask-HQ can be downloaded from CelebAMask-HQ. The dataset should be structured as below:

  ```
  CelebAMask/
  ├─ train/
  │  ├─ images/
  │  │  ├─ 0.jpg
  │  │  ├─ ...
  │  │  ├─ 27999.jpg
  │  ├─ labels/
  │  │  ├─ 0.png
  │  │  ├─ ...
  │  │  ├─ 27999.png
  ├─ test/
  │  ├─ images/
  │  │  ├─ 28000.jpg
  │  │  ├─ ...
  │  │  ├─ 29999.jpg
  │  ├─ labels/
  │  │  ├─ 28000.png
  │  │  ├─ ...
  │  │  ├─ 29999.png
  ```
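
  If you want to sanity-check the layout, a minimal sketch along these lines (paths taken from the tree above; not part of the repository) verifies that every image has a matching label map:

  ```python
  # Hypothetical layout check for the CelebAMask-HQ structure shown above:
  # every image should have a label map with the same file stem.
  from pathlib import Path

  root = Path("CelebAMask")
  for split in ("train", "test"):
      images = {p.stem for p in (root / split / "images").glob("*.jpg")}
      labels = {p.stem for p in (root / split / "labels").glob("*.png")}
      missing = sorted(images - labels)
      print(f"{split}: {len(images)} images, {len(labels)} labels, {len(missing)} unmatched")
  ```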
- ADE20K can be downloaded from the MIT Scene Parsing Benchmark, and we followed SPADE for preparation. The dataset should be structured as below:

  ```
  ADE20K/
  ├─ ADEChallengeData2016/
  │  ├─ images/
  │  │  ├─ training/
  │  │  │  ├─ ADE_train_00000001.jpg
  │  │  │  ├─ ...
  │  │  ├─ validation/
  │  │  │  ├─ ADE_val_00000001.jpg
  │  │  │  ├─ ...
  │  ├─ annotations/
  │  │  ├─ training/
  │  │  │  ├─ ADE_train_00000001.png
  │  │  │  ├─ ...
  │  │  ├─ validation/
  │  │  │  ├─ ADE_val_00000001.png
  │  │  │  ├─ ...
  ```
- COCO-STUFF can be downloaded from cocostuff, and we followed SPADE for preparation. The dataset should be structured as below:

  ```
  coco/
  ├─ train_img/
  │  ├─ 000000000009.jpg
  │  ├─ ...
  ├─ train_label/
  │  ├─ 000000000009.png
  │  ├─ ...
  ├─ train_inst/
  │  ├─ 000000000009.png
  │  ├─ ...
  ├─ val_img/
  │  ├─ 000000000139.jpg
  │  ├─ ...
  ├─ val_label/
  │  ├─ 000000000139.png
  │  ├─ ...
  ├─ val_inst/
  │  ├─ 000000000139.png
  │  ├─ ...
  ```
- Our noisy SIS dataset for the three benchmark settings (DS, Edge, and Random) based on ADE20K is available at Google Drive.
- You can also generate the same dataset by running the Python scripts in `image_process/`.
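
  The exact corruption protocols are implemented by those scripts; purely as an illustration of the kind of label noise involved (an assumption, not the official procedure), a Random-style corruption could look like:

  ```python
  # Illustrative only: flip a fraction of label pixels to random classes.
  # This is NOT the protocol in image_process/; use those scripts to reproduce the benchmarks.
  import numpy as np
  from PIL import Image

  def random_corrupt(label_path, num_classes=151, flip_ratio=0.3, seed=0):
      # num_classes=151 assumes ADE20K-style labels in [0, 150]; adjust for your dataset.
      rng = np.random.default_rng(seed)
      label = np.array(Image.open(label_path))     # (H, W) integer label map
      mask = rng.random(label.shape) < flip_ratio  # pixels to corrupt
      label[mask] = rng.integers(0, num_classes, size=int(mask.sum()))
      return Image.fromarray(label.astype(np.uint8))
  ```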
- You can set CUDA visible devices by `VISIBLE_DEVICES=${GPU_ID}` (e.g., `VISIBLE_DEVICES=0,1,2,3`).
- Run

  ```bash
  sh scripts/train.sh
  ```

- For more details, please refer to `scripts/train.sh`.
- Pretrained models are available at Google Drive.
- Run

  ```bash
  sh scripts/sample.sh
  ```

- For more details, please refer to `scripts/sample.sh`.
- Our samples are available at Google Drive.
- FID (fidelity)

  The code is based on OASIS.

  ```bash
  python evaluations/fid/tests_with_FID.py --path {SAMPLE_PATH} {GT_IMAGE_PATH} -b {BATCH_SIZE} --gpu {GPU_ID}
  ```
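
  For reference, once Inception features are extracted (which the script above handles), FID is the Fréchet distance between Gaussians fitted to the real and generated feature statistics. A minimal numpy/scipy sketch of that distance (not the repository's implementation):

  ```python
  # Fréchet distance between Gaussians (mu1, sigma1) and (mu2, sigma2),
  # i.e. the quantity FID reports on Inception-feature statistics.
  import numpy as np
  from scipy import linalg

  def frechet_distance(mu1, sigma1, mu2, sigma2):
      diff = mu1 - mu2
      covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)
      if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary parts
          covmean = covmean.real
      return float(diff @ diff + np.trace(sigma1) + np.trace(sigma2) - 2 * np.trace(covmean))
  ```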
- LPIPS (diversity)

  You should generate 10 sets of samples and make `lpips_list.txt` with `evaluations/lpips/make_lpips_list.py`. The code is based on stargan-v2.

  ```bash
  python evaluations/lpips/lpips.py --root_path results/ade20k --test_list lpips_list.txt --batch_size 10
  ```
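
  Conceptually, the diversity score is the mean pairwise LPIPS distance over the 10 samples generated for each label map. A rough sketch using the `lpips` pip package (the list format and actual computation are those of the script above):

  ```python
  # Mean pairwise LPIPS over N samples generated from the same label map (diversity).
  # Sketch with the lpips pip package; evaluations/lpips/lpips.py is the script used in practice.
  import itertools
  import lpips
  import torch

  loss_fn = lpips.LPIPS(net="alex").eval()

  @torch.no_grad()
  def pairwise_lpips(samples):
      # samples: list of image tensors scaled to [-1, 1], each of shape (1, 3, H, W)
      dists = [loss_fn(a, b).item() for a, b in itertools.combinations(samples, 2)]
      return sum(dists) / len(dists)
  ```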
- mIoU (correspondence). A generic computation sketch follows the dataset-specific instructions below.

  - CelebAMask-HQ: U-Net. Clone the repo and set up the environment from imaginaire, and add `evaluation/miou/test_celeba.py` to `imaginaire/`. Check out `evaluation/miou/celeba_config.yaml` for the config file and fix the paths accordingly.

    ```bash
    cd imaginaire
    python test_celeba.py
    ```

  - ADE20K: ViT-Adapter-S with UperNet. Clone the repo and set up the environment from ViT-Adapter.

    ```bash
    cd ViT-Adapter/segmentation
    # the third positional argument is NUM_GPUS
    bash dist_test.sh \
        configs/ade20k/upernet_deit_adapter_small_512_160k_ade20k.py \
        pretrained/upernet_deit_adapter_small_512_160k_ade20k.pth \
        1 \
        --eval mIoU \
        --img_dir {SAMPLE_DIR} \
        --ann_dir {LABEL_DIR} \
        --root_dir {SAMPLE_ROOT_DIR}
    ```

  - COCO-STUFF: DeepLabV2. Clone the repo and set up the environment from imaginaire, and add `evaluation/miou/test_coco.py` to `imaginaire/`. Check out `evaluation/miou/coco_config.yaml` for the config file and fix the paths accordingly.

    ```bash
    cd imaginaire
    python test_coco.py
    ```
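
  Whichever segmentation network is used, the reported score is the mean intersection-over-union between predicted and ground-truth label maps. A generic numpy sketch (`num_classes` and `ignore_index` are placeholders, not values taken from the repos above):

  ```python
  # Generic mIoU from lists of predicted and ground-truth (H, W) label maps.
  # num_classes and ignore_index are placeholders; each evaluation repo sets them per dataset.
  import numpy as np

  def mean_iou(preds, gts, num_classes, ignore_index=255):
      conf = np.zeros((num_classes, num_classes), dtype=np.int64)
      for pred, gt in zip(preds, gts):
          valid = gt != ignore_index
          conf += np.bincount(
              num_classes * gt[valid].astype(np.int64) + pred[valid],
              minlength=num_classes ** 2,
          ).reshape(num_classes, num_classes)
      inter = np.diag(conf)
      union = conf.sum(0) + conf.sum(1) - inter
      iou = inter / np.maximum(union, 1)
      return float(iou[union > 0].mean())
  ```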
- This repository is built upon guided-diffusion and SDM.
If you use this work, please cite as:
```bibtex
@InProceedings{pmlr-v235-ko24e,
  title     = {Stochastic Conditional Diffusion Models for Robust Semantic Image Synthesis},
  author    = {Ko, Juyeon and Kong, Inho and Park, Dogyun and Kim, Hyunwoo J.},
  booktitle = {Proceedings of the 41st International Conference on Machine Learning},
  year      = {2024}
}
```
Feel free to contact us if you need help or further clarification!
- Juyeon Ko ([email protected])
- Inho Kong ([email protected])