
Inquiry Regarding Training Procedure and Model Selection on Learning Rate Adjustment #28

Open
Zhu-Luyu opened this issue Jun 23, 2024 · 2 comments

Comments

@Zhu-Luyu

Question:
When the learning rate is adjusted after early stopping, why does the code continue training from the latest saved model instead of reloading the best-performing model? Is this behavior intentional, or could it be a bug?

    # Step the early-stopping tracker with the current validation accuracy.
    early_stopping(acc, model)
    if early_stopping.early_stop:
        # Patience is exhausted: try lowering the learning rate and continuing.
        cont_train = model.adjust_learning_rate()
        if cont_train:
            print("Learning rate dropped by 10, continue training...")
            # Reset the tracker; note that training resumes from the model's
            # current (latest) weights, not from the best saved checkpoint.
            early_stopping = EarlyStopping(patience=opt.earlystop_epoch, delta=-0.002, verbose=True)
        else:
            print("Early stopping.")
            break
    model.train()
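
For context, a plausible reading of `adjust_learning_rate` is that it divides the optimizer's learning rate by 10 and returns whether training should continue, i.e. whether the rate is still above some floor. The sketch below is an assumption for illustration; the method body and the `min_lr` floor are not taken from the repository:

    def adjust_learning_rate(self, min_lr=1e-6):
        # Divide the learning rate of every parameter group by 10 and
        # signal the caller to stop once it would fall below the floor.
        for param_group in self.optimizer.param_groups:
            param_group['lr'] /= 10.0
            if param_group['lr'] < min_lr:
                return False
        return True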
@Zhu-Luyu (Author)

That is, it is possible that training continues from an already overfitted model, since the code resumes from the latest weights rather than the best checkpoint.

@PeterWang512 (Owner)

In our experience, we didn't observe overfitting with this update strategy. However, feel free to change this code to test whether it improves performance. Thanks!
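
For anyone who wants to try that change, a minimal sketch of reloading the best checkpoint before resuming at the lowered learning rate is given below. It assumes the `EarlyStopping` helper saves a 'best' checkpoint whenever the validation score improves and that the model exposes a matching `load_networks` method; both names are illustrative assumptions, not confirmed against the repository:

    early_stopping(acc, model)
    if early_stopping.early_stop:
        cont_train = model.adjust_learning_rate()
        if cont_train:
            print("Learning rate dropped by 10, continue training...")
            # Hypothetical change: restore the best-performing weights before
            # resuming. Assuming load_networks restores only network weights,
            # the learning-rate drop applied above is preserved.
            model.load_networks('best')
            early_stopping = EarlyStopping(patience=opt.earlystop_epoch, delta=-0.002, verbose=True)
        else:
            print("Early stopping.")
            break
    model.train()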
