Universal Style Transfer

This repository is a PyTorch implementation of the paper Universal Style Transfer via Feature Transforms [NIPS 2017].

Installation

To work with the code in this repository, run the following commands in your virtual environment:

pip install --upgrade pip
pip install -r requirements.txt

The pinned package versions have been tested with Python 3.9.14.

Training

The Universal Style Transfer framework requires up to 5 trained decoders, one for each pre-trained VGG-19 encoder cut off at a different depth. Before training, download the MS COCO training and validation datasets, and make sure that the encoder model for depth X is located at models/encoders/encoderX.pth.
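
For intuition, each decoder learns to invert its encoder. Below is a minimal sketch of the reconstruction objective, assuming the pixel-plus-feature loss from the paper; the repository's exact loss terms and weights may differ:

import torch.nn.functional as F

def reconstruction_loss(encoder, decoder, image, feature_weight=1.0):
    # Encode the image to depth-X features, then decode back to pixel space.
    feat = encoder(image)
    recon = decoder(feat)
    # Pixel reconstruction loss plus a feature loss between the re-encoded
    # reconstruction and the original features (as described in the paper).
    pixel_loss = F.mse_loss(recon, image)
    feature_loss = F.mse_loss(encoder(recon), feat)
    return pixel_loss + feature_weight * feature_loss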

Download data

To download the training and validation datasets, run the following commands:

cd data
sh download_data.sh
cd ..

Run the training script

To start training, run the following command:

python train.py --config /path/to/config/file.json --depth X
  • The --config argument specifies the path to the configuration file; the default is configs/default_config.json. To use a custom configuration, create a .json file and pass its path to this argument (see the sketch after this list).
  • The --depth argument specifies the depth of the decoder to be trained. X has to be an integer from the set {1, 2, 3, 4, 5}.
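
Below is a hypothetical custom configuration, written as an annotated Python dict for readability (save the equivalent JSON to your config file). Only checkpoint_path is referenced elsewhere in this README; every other key is an illustrative assumption and may not match the repository's actual schema:

config = {
    "checkpoint_path": "models/decoders/decoder3.pth",  # used by --resume (see below)
    "batch_size": 8,         # assumed key: training batch size
    "learning_rate": 1e-4,   # assumed key: optimizer step size
    "num_epochs": 10,        # assumed key: number of training epochs
    "data_dir": "data",      # assumed key: MS COCO dataset location
}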

Resume training from a checkpoint

To initialize the decoder weights from a specific checkpoint, run the following command:

python train.py --config /path/to/config/file.json --depth X --resume
  • If config["checkpoint_path"] points to a file of decoder weights, initialization starts from those weights; otherwise, the default model at models/decoders/decoderX.pth is loaded (sketched below).
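
A minimal sketch of that fallback logic, as an assumption about the implementation rather than the repository's exact code:

import os
import torch

def resume_checkpoint_path(config, depth):
    ckpt = config.get("checkpoint_path", "")
    if ckpt and os.path.isfile(ckpt):
        return ckpt                               # resume from the user-supplied weights
    return f"models/decoders/decoder{depth}.pth"  # fall back to the default decoder

# Example usage:
# decoder.load_state_dict(torch.load(resume_checkpoint_path(config, depth)))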


Style Transfer

After training is finished, make sure that the decoder model for depth X is located at models/decoders/decoderX.pth. The style transfer framework runs in two modes, set by the --level argument.

Single-Level Style Transfer

Single-level style transfer uses only the encoder and the decoder at the specified depth X. Run single-level inference with the following command:

python stylize.py --level single --depth X --strength 1.0 --content_dir /path/to/content/directory --style_dir /path/to/style/directory --output_dir /path/to/output/directory
  • The --depth argument specifies the depth of the single encoder-decoder model. X has to be an integer from the set {1, 2, 3, 4}.
  • The --strength argument specifies the strength of stylization. It has to be a floating-point number in the range [0, 1]. For single-level style transfer, --strength 1.0 is recommended, which is the default behavior of the framework (see the sketch after this list).
  • The --content_dir, --style_dir, and --output_dir arguments specify the directories of content images, style images, and the stylized content images (style transfer results), respectively.
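
For intuition, in the paper's whitening-and-coloring transform (WCT) the strength acts as the blending coefficient between the transformed features and the original content features. A sketch follows, where wct is a hypothetical helper performing the whitening-coloring step; this repository's code may differ:

def blend_features(content_feat, style_feat, strength):
    # Whiten the content features and re-color them with the style statistics,
    # then linearly blend with the original content features (alpha = strength).
    transformed = wct(content_feat, style_feat)
    return strength * transformed + (1.0 - strength) * content_feat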

Multi-Level Style Transfer

Multi-level style transfer uses the encoders and decoders from depth 1 up to the specified depth X. Run multi-level inference with the following command:

python stylize.py --level multi --depth X --strength 0.6 --content_dir /path/to/content/directory --style_dir /path/to/style/directory --output_dir /path/to/output/directory
  • The --depth argument specifies the depth of the largest encoder-decoder model to be used. X has to be an integer from the set {1, 2, 3, 4}.
  • The --strength argument specifies the strength of stylization. It has to be a floating-point number in the range [0, 1]. For multi-level style transfer with depth X, --strength 0.2*X is recommended (e.g., 0.8 for X = 4), which is the default behavior of the framework (see the sketch after this list).
  • The --content_dir, --style_dir, and --output_dir arguments specify the directories of content images, style images, and the stylized content images (style transfer results), respectively.
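
A sketch of the multi-level cascade, as an assumption based on the paper (stylization runs from the deepest level down to depth 1, each level re-stylizing the previous output); stylize_at_depth is a hypothetical helper wrapping one encoder-decoder pair plus the WCT step:

def multi_level_stylize(content, style, depth, strength):
    result = content
    for d in range(depth, 0, -1):  # deepest level first, per the paper
        result = stylize_at_depth(result, style, depth=d, strength=strength)
    return result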

Results

Single-Level Style Transfer

[Result images for depths 1, 2, 3, and 4.]


Multi-Level Style Transfer

[Result images for maximum depths 2, 3, and 4.]


Citation

@inproceedings{WCT-NIPS-2017,
    author = {Li, Yijun and Fang, Chen and Yang, Jimei and Wang, Zhaowen and Lu, Xin and Yang, Ming-Hsuan},
    title = {Universal Style Transfer via Feature Transforms},
    booktitle = {Advances in Neural Information Processing Systems},
    year = {2017}
}
