This repository contains the accompanying code for *GAN Inversion for Out-of-Range Images with Geometric Transformations* (ICCV 2021).
- Ubuntu 18.04 or higher
- CUDA 10.0 or higher
- PyTorch 1.6 or higher
- Python 3.7 or higher
pip install -r requirements.txt
- Change directory into BDInvert.
cd BDInvert
- Train the base code encoder.
python train_basecode_encoder.py
  - `--model_name`: Changes the backbone StyleGAN model; available backbones are located in the model zoo.
  - `--basecode_spatial_size`: Changes the spatial resolution of the base code. An example invocation is given below.
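For example, the following trains the encoder for a StyleGAN2 FFHQ 1024 backbone with a 16x16 base code; the `--model_name` value here is an assumption, so use whichever name your model zoo actually provides.
python train_basecode_encoder.py --model_name stylegan2_ffhq1024 --basecode_spatial_size 16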
- Find pnorm parameters.
python pca_p_space.py
Download and unzip under BDInvert/pretrained_models/.

| Encoder Pretrained Models | Base Code Spatial Size |
|---|---|
| StyleGAN2 pretrained on FFHQ 1024, 16x16 | 16x16 |
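For example, assuming the downloaded archive is named `basecode_encoder_ffhq1024_16x16.zip` (a hypothetical filename; substitute the file you actually downloaded), it can be extracted with:
unzip basecode_encoder_ffhq1024_16x16.zip -d BDInvert/pretrained_models/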
- The default test setting uses StyleGAN2 pretrained on FFHQ 1024 with a base code spatial size of 16x16.
- Change directory into BDInvert.
cd BDInvert
- Make an image list.
python make_list.py --image_folder ./test_img
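The generated `./test_img/test.list` is assumed to be a plain-text file with one image path per line; a quick way to sanity-check it:
head ./test_img/test.list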
- Embed images into StyleGAN's latent codes.
python invert.py --encoder_pt_path {encoder_pt_path}
  - `--image_list`: Inversion target image list generated by make_list.py above. Default is ./test_img/test.list.
  - `--weight_pnorm_term`: There is a well-known trade-off between editing quality and reconstruction quality; this argument controls that trade-off. An example call is shown below.
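A full call might look like the following; `{encoder_pt_path}` is a placeholder for the trained or downloaded encoder checkpoint, and the `--weight_pnorm_term` value is only an illustrative assumption, not a recommended setting.
python invert.py --encoder_pt_path {encoder_pt_path} --image_list ./test_img/test.list --weight_pnorm_term 0.01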
- Edit the embedded results.
python edit.py {inversion directory}
  - `--edit_direction`: Changes the edit direction; available directions are located in BDInvert/editings/interfacegan_directions. An example is shown below.
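For example, assuming an `age.pt` direction file exists in that folder (the actual filenames depend on the shipped InterFaceGAN directions):
python edit.py {inversion directory} --edit_direction BDInvert/editings/interfacegan_directions/age.pt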
- We changed the detail code regularization from hard clipping in P-norm+ space to L2-norm regularization, following the update of the original paper.
- Due to this change, a new hyperparameter, `weight_pnorm_term`, has been added.
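Roughly speaking (our own notation, not taken from the paper), the inversion objective can be read as

$$\mathcal{L} = \mathcal{L}_{\text{recon}} + \lambda_{\text{pnorm}} \, \lVert d \rVert_2^2$$

where $d$ is the detail code in P-norm+ space and $\lambda_{\text{pnorm}}$ is `weight_pnorm_term`; under this reading, a larger weight presumably favors editing quality at the cost of reconstruction fidelity.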
This software is being made available under the terms in the LICENSE file.
Any exemptions to these terms require a license from the Pohang University of Science and Technology.
NOTE
- Our implementation is heavily based on the "GenForce Library".
- InterFaceGAN editing vectors are from "encoder4editing".