This is the official repository for Worst-case forget set on class-wise unlearning. The code structure of this project is adapted from the DP4TL codebase.
You can install the necessary Python packages with:

```bash
pip install -r requirements.txt
```
Note that, to accelerate model training, this code repository is built on FFCV; please refer to its official website for installation instructions. We build our argument system with fastargs, and we provide a revised version here. The latest fastargs is installed automatically by the command above.
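For reference, the sketch below shows how a fastargs-based entry point typically consumes a `--config-file` plus dotted command-line overrides such as `--train.optimizer.lr` (used in the commands later in this README). It follows the upstream fastargs API; the revised fastargs shipped with this repository, the section layout, and the placeholder default value may differ.

```python
# Minimal fastargs sketch (upstream API); the revised fastargs in this repo may differ.
from argparse import ArgumentParser

from fastargs import Param, Section, get_current_config
from fastargs.decorators import param

# Declare a parameter section; the dotted name matches the CLI flag --train.optimizer.lr.
Section('train.optimizer', 'Optimizer settings').params(
    lr=Param(float, 'learning rate', default=1e-4),  # default here is a placeholder
)

@param('train.optimizer.lr')
def main(lr):
    # fastargs injects `lr` from the merged JSON config / CLI values.
    print(f'training with lr={lr}')

if __name__ == '__main__':
    config = get_current_config()
    parser = ArgumentParser()
    config.augment_argparse(parser)       # adds --config-file and per-parameter flags
    config.collect_argparse_args(parser)  # merges config-file values with CLI overrides
    config.validate(mode='stderr')
    config.summary()
    main()
```

In upstream fastargs, command-line values override those collected from the config file, which is how flags such as `--train.optimizer.lr 5e-5` in the commands below adjust a run without editing the JSON.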
For ImageNet, we provide the preprocessed data (`.beton`) in this link. Please download the data and put it in the `data` folder. Replace the `train_path` and `val_path` of the `dataset` in the `.json` files within the configs with the paths of the preprocessed data (`.beton`).
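The exact layout of each config file is defined under `configs/`; purely as an illustration (the nesting and the `.beton` file names below are assumptions), the edited `dataset` entry might look like:

```json
{
  "dataset": {
    "train_path": "data/imagenet_train.beton",
    "val_path": "data/imagenet_val.beton"
  }
}
```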
In this section, we provide the instructions to reproduce the results in our paper.
- Get the original model. We first train the original model (ResNet-18) on ImageNet using the following command:

  ```bash
  python src/experiment/imagenet_train_from_scratch.py --config-file configs/imagenet_train_from_scratch.json
  ```
- Replace the `model_path` of the `blo` in the `.json` file in the configs with the path of the original model weights (`.ckpt`). Find the worst-case forget set for class-wise unlearning. The selection weights will be saved at `file/experiments/selection_worst_case`.

  ```bash
  python src/experiment/class_wise_worst_case_mu.py --config-file configs/selection_worst_case.json --train.optimizer.lr 5e-5 --blo.w_lr 1e-4
  ```
- Replace the `training` and `testing` of the `indices` in the `.json` file in the configs with the path of the selection weights (`.indices`); an illustrative snippet is given at the end of this section. Then evaluate on the worst-case forget set.

  ```bash
  # Retrain
  python src/experiment/evaluation_retrain.py --config-file configs/evaluation_retrain.json --train.optimizer.lr {theta_lr}
  ```
  If you want to use an approximate unlearning method (such as FT or l1-sparse), you should also replace the `model_path` of the `logging` in the `.json` file in the configs with the path of the original model weights (`.ckpt`).

  ```bash
  # l1-sparse
  python src/experiment/evaluation_ft.py --config-file configs/evaluation_retrain.json --train.optimizer.lr {theta_lr} --train.alpha {alpha}
  ```
  If `indices` are `null`, evaluation will be performed on a random forget set.
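To make the index configuration above concrete, a hypothetical `indices` entry pointing at the saved selection weights could look like the following (the two `.indices` file names are placeholders, and whether `training` and `testing` use separate files is an assumption). Setting both fields to `null` instead triggers evaluation on a random forget set.

```json
{
  "indices": {
    "training": "file/experiments/selection_worst_case/train.indices",
    "testing": "file/experiments/selection_worst_case/test.indices"
  }
}
```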