Thank you very much for providing this valuable benchmark for the MU community.

I notice that you report the IRA and CRA metrics for the SEOT method in Table 2. After checking your script in sampling_unlearned_models/seot.py, my understanding is that the optimization target of the SEOT method is the embedding of the concept rather than the model parameters. For example, to erase the concept "dog," the embedding of "dog" should be optimized while the parameters of the model and the embeddings of other objects stay fixed. In that case, IRA and CRA should reflect the capabilities of the pre-trained Stable Diffusion model. Why, then, is SEOT not the highest among the methods (especially the low 84.31%/82.71% CRA for object unlearning)?

Thank you!
Your understanding of the SEOT method is correct. Here, IRA and CRA measure how well all other concepts are retained. When the embedding of some concept, such as "Dog" or "Abstractionism", is modified, it is very likely to exert a negative influence on other concepts as well; this is why the measured retainability is low. I am not aware of any theoretical guarantee that SEOT should achieve the highest CRA/IRA among the methods. If you have one, please feel free to elaborate.
For example, to erase the concept "dog", the SEOT method requires the token indices of "dog" in the corresponding sentences, after which the embedding of "dog" is optimized. Other object concepts such as "cat", or style concepts such as "Van Gogh", are not optimized, since they are not the target concept in this setting. Therefore, when retainability is evaluated, the original embeddings of "cat" or "Van Gogh" are fed into the fine-tuned model that you provided on Google Drive. In this case, I believe SEOT should achieve the highest CRA/IRA among the methods.
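To make the point concrete, here is a minimal, hypothetical sketch of embedding-only optimization with a frozen model. A toy linear layer stands in for the diffusion model, and the erasure loss is purely illustrative (not the actual SEOT objective); the point is only that concepts whose embeddings are untouched pass through the frozen model unchanged:

```python
import torch

torch.manual_seed(0)
embed_dim = 8

# Toy stand-in for the frozen diffusion model.
model = torch.nn.Linear(embed_dim, embed_dim)
for p in model.parameters():
    p.requires_grad_(False)  # freeze all model parameters, as in embedding-only unlearning

# Embedding table: row 0 = target concept ("dog"), row 1 = retained concept ("cat").
embeddings = torch.randn(2, embed_dim)
target = embeddings[0].clone().requires_grad_(True)  # only this vector is optimized
retained_before = model(embeddings[1]).detach().clone()

opt = torch.optim.Adam([target], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    # Illustrative erasure objective: suppress the target concept's output.
    loss = model(target).pow(2).mean()
    loss.backward()
    opt.step()

# The retained concept's output is identical, because neither its embedding
# nor the model parameters were modified.
retained_after = model(embeddings[1])
print(torch.allclose(retained_before, retained_after))  # True
```

Under this reading, feeding the original embeddings of non-target concepts into a model whose parameters were never updated should reproduce the pre-trained behavior exactly, which is what motivates the question about the CRA/IRA numbers.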
This is my understanding of your benchmark regarding the SEOT method. If there are any inaccuracies, I would appreciate it if you could point them out. Thank you!