This repository contains the code for the paper "StyLess: Boosting the Transferability of Adversarial Examples" (CVPR 2023). The paper is available on arXiv, and a video presentation is available on YouTube; the slides and poster are also available.
The code has been tested on RTX 3090 GPUs with Python 3.8.5, PyTorch 1.7.1, CUDA 11.1, and torchvision 0.8.2.
Perform the MTDSI-StyLess attack on the test samples as follows:
# MTDSI-StyLess, ResNet-50, test_samples
python styless_attack.py --mi --ti --di --si --styless
The results will be saved in the folder exp/test_samples/resnet50/ifgsm_mi_ti_di_si_styless/. By default, the ten images in the folder data/test_samples/ are used as test samples, and their ground-truth labels are encoded in the image names. The test samples were randomly sampled from the ImageNet validation set.
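Because each label is carried by the image file name, loading the test set reduces to scanning the folder. A minimal sketch of this idea, with the caveat that treating the file-name stem as the label is an assumption here (check data/test_samples/ for the repository's actual naming convention):

```python
from pathlib import Path

def load_label_map(img_dir):
    """Map each image file to the ground-truth label encoded in its name.

    Hypothetical scheme: the file-name stem (name without extension) is
    assumed to be the label; the repository's real convention may differ.
    """
    return {p.name: p.stem for p in sorted(Path(img_dir).iterdir()) if p.is_file()}
```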
Next, evaluate the transferability of adversarial examples generated by the MTDSI-StyLess attack as follows:
# test_samples
python eval.py --save_dir exp/test_samples/resnet50/ifgsm_mi_ti_di_si_styless/
The results will be saved as a CSV file in the folder exp/test_samples/resnet50/ifgsm_mi_ti_di_si_styless/.
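Once eval.py has written its CSV, one may want a quick per-column summary (e.g., mean transfer success per target model). A small sketch of that, noting that the CSV schema produced by eval.py is not documented here; this only assumes a header row followed by rows with numeric cells (a hypothetical layout):

```python
import csv

def average_numeric_columns(csv_path):
    """Average every numeric column of a CSV file.

    Assumes a header row; non-numeric cells are skipped. The actual
    columns emitted by eval.py may differ from this hypothetical schema.
    """
    totals, counts = {}, {}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            for key, value in row.items():
                try:
                    x = float(value)
                except (TypeError, ValueError):
                    continue  # ignore names/labels and empty cells
                totals[key] = totals.get(key, 0.0) + x
                counts[key] = counts.get(key, 0) + 1
    return {k: totals[k] / counts[k] for k in totals}
```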
The folder run_scripts/ contains scripts for running different attack methods on different surrogate models. For example, run three attacks on ResNet-50 as follows:
# ResNet-50; test_samples; MTD-StyLess, MTDSI-StyLess, and MTDSAI-StyLess
bash run_scripts/run_resnet50_test.sh
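The run scripts simply invoke styless_attack.py with different flag combinations. A sketch of how such command lines can be assembled programmatically, using only the flags shown in the README examples (any other options styless_attack.py accepts are not covered here):

```python
def build_attack_cmd(flags, img_dir=None):
    """Assemble a styless_attack.py command line from method flags.

    Flag names (--mi, --ti, --di, --si, --styless, --img_dir) are taken
    from the README examples; this is a sketch, not the run scripts'
    actual logic.
    """
    cmd = ["python", "styless_attack.py"] + [f"--{flag}" for flag in flags]
    if img_dir is not None:
        cmd += ["--img_dir", img_dir]
    return cmd
```

Such a list can be passed directly to subprocess.run to launch one attack per flag combination.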
To run experiments on the complete dataset, download it from the provided Google Drive link, decompress the downloaded file to obtain a folder named imagenet_vt, and place that folder in data/.
The folder imagenet_vt contains 1000 images whose names encode their ground-truth labels. These images were randomly selected from the ImageNet validation set by the authors of this work. Additional information about the dataset can be found in the CSV files in the folder data/.
Perform the MTDSI-StyLess attack on the complete dataset as follows:
# MTDSI-StyLess, ResNet-50, complete dataset
python styless_attack.py --mi --ti --di --si --styless --img_dir 'data/imagenet_vt'
The results will be saved in the folder exp/imagenet_vt/resnet50/ifgsm_mi_ti_di_si_styless/. This process takes approximately three hours.
Then, evaluate the generated adversarial examples as follows:
# complete dataset
python eval.py --save_dir exp/imagenet_vt/resnet50/ifgsm_mi_ti_di_si_styless/
The results will be saved as a CSV file in the folder exp/imagenet_vt/resnet50/ifgsm_mi_ti_di_si_styless/. To use different surrogate models or attack methods, refer to the scripts in the folder run_scripts/, such as run_resnet50.sh, run_wrn101_test.sh, and run_densenet121_test.sh.
This repository builds on the code of ILA and SI-FGSM. We thank the authors for sharing their code.
If you find this repository useful, please cite our paper:
@inproceedings{liang2023styless,
title={StyLess: Boosting the Transferability of Adversarial Examples},
author={Liang, Kaisheng and Xiao, Bin},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={8163--8172},
year={2023}
}