Hi,
I'm using bilevel optimization to optimize a set of hyperparameters and parameters jointly. Unfortunately, I see a large difference between the `torchopt.Meta*` variants and the ordinary optimizers: without the meta variants in the implicit solve function, the model does not seem to converge at all. One cause seems to be the model parameter detach at https://github.com/metaopt/torchopt/blob/main/torchopt/diff/implicit/nn/module.py#L142, but it is not clear to me what I have to do differently to get the normal optimizer working. I restore the parameters after the hyperparameter update step.
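For reference, here is a minimal sketch of the two inner-loop variants I'm comparing inside the solve function (names such as `inner_net`, `objective`, `x`, `y`, and `n_inner_steps` are illustrative placeholders, not code from my actual project):

```python
# Minimal sketch of the two inner solves being compared; `inner_net` is any
# module exposing `.objective(x, y)` (placeholder names, not my real code).
import torch
import torchopt


def solve_with_meta_optimizer(inner_net, x, y, n_inner_steps=100):
    # Differentiable Meta* variant: this is the version that converges for me.
    optim = torchopt.MetaAdam(inner_net, lr=1e-3)
    for _ in range(n_inner_steps):
        loss = inner_net.objective(x, y)
        optim.step(loss)  # differentiable in-place parameter update
    return inner_net


def solve_with_plain_optimizer(inner_net, x, y, n_inner_steps=100):
    # Ordinary optimizer: this is the version that does not converge for me.
    params = tuple(inner_net.parameters())
    optim = torchopt.Adam(params, lr=1e-3)
    with torch.enable_grad():  # ensure grads are enabled inside the solve
        for _ in range(n_inner_steps):
            loss = inner_net.objective(x, y)
            optim.zero_grad()
            loss.backward(inputs=params)
            optim.step()
    return inner_net
```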