During hyperparameter optimisation, some parameter configurations may lead to exploding or vanishing gradients that yield NaN loss values. Currently this kills the whole training run, but the search should instead be allowed to continue with the next HPO iteration and a different parameter combination. In such cases, replacing the NaN loss with a large finite value should solve the issue: the optimizer then steers away from that parameter combination and the search continues.
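A minimal sketch of the proposed behaviour, in plain Python (the names `safe_objective`, `PENALTY_LOSS`, and `toy_train` are illustrative, not part of any existing API):

```python
import math

# Illustrative constant: a large finite loss so the HPO search
# avoids configurations that diverged, instead of crashing on NaN.
PENALTY_LOSS = 1e6

def safe_objective(train_fn, params):
    """Run one HPO trial; replace non-finite losses with a penalty instead of aborting."""
    loss = train_fn(params)  # may return NaN/inf on exploding/vanishing gradients
    if loss is None or math.isnan(loss) or math.isinf(loss):
        return PENALTY_LOSS
    return loss

# Toy training function (hypothetical) that "diverges" for large learning rates.
def toy_train(params):
    lr = params["lr"]
    return float("nan") if lr > 1.0 else lr ** 2

for lr in (0.01, 0.1, 10.0):
    print(lr, safe_objective(toy_train, {"lr": lr}))
```

With this wrapper, a trial that produces NaN simply reports a very bad score, and the optimisation loop proceeds to the next parameter combination.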