To generate an adversarial example given a model and a clean example, gradient-based techniques generally move along the loss gradient of the clean example, trying to maximise the loss.
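For concreteness, a minimal single-step (FGSM-style) sketch in PyTorch; `model`, `x`, `y`, and the `eps` budget are placeholders, not my actual setup:

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """One FGSM step: perturb x along the sign of its loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move in the direction that increases the loss, clamped to a valid image range.
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1)
    return x_adv.detach()
```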
In my old adversarial-training implementation, I noticed I generate a batch of adversarial examples by perturbing a batch of clean examples. My code aggregated the cross-entropy loss over all the examples in the batch and used that aggregate loss to perturb every point, instead of using each point's own CE loss. No wonder pure AT did not work in my thesis; elementary error.
The fix is to perturb each clean example in the batch on its own, computing and using the CE loss for that point alone, as in the sketch below.
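Roughly what I mean (a sketch, not my exact thesis code; `model`, the attack step, and the names are illustrative):

```python
import torch
import torch.nn.functional as F

def perturb_batch_per_example(model, xs, ys, eps=8 / 255):
    """Perturb each clean example using only its own CE loss."""
    adv = []
    for x, y in zip(xs, ys):
        x_adv = x.unsqueeze(0).clone().detach().requires_grad_(True)
        # CE loss for this single point only, not the batch aggregate.
        loss = F.cross_entropy(model(x_adv), y.unsqueeze(0))
        loss.backward()
        x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0, 1)
        adv.append(x_adv.squeeze(0).detach())
    return torch.stack(adv)
```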
Not sure if my word salad here is comprehensible but hey