Request to cover "Insensitivity to tuning" in the playbook #72

Open
21kc-caracol opened this issue Jul 9, 2024 · 0 comments
I've followed the playbook's guidelines and run two "studies", but the results I got aren't covered in the playbook:

  1. Batch size tuning (16 to 512, in powers of 2)
  2. Learning rate + beta1 tuning for Adam: 9 combinations, i.e. 3 learning-rate scales a factor of 10 apart crossed with 3 beta1 values (0.8, 0.9, 0.95); a sketch of both grids follows below
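
For concreteness, here is a minimal sketch of the two grids. The exact learning-rate values and the `train_and_evaluate` helper are hypothetical stand-ins for my setup:

```python
import itertools

# Study 1: batch sizes from 16 to 512 in powers of 2.
batch_sizes = [16, 32, 64, 128, 256, 512]

# Study 2: 3 learning-rate scales a factor of 10 apart (assumed values,
# mine differ) crossed with 3 Adam beta1 values -> 9 combinations.
learning_rates = [1e-4, 1e-3, 1e-2]
beta1_values = [0.8, 0.9, 0.95]

for lr, beta1 in itertools.product(learning_rates, beta1_values):
    # train_and_evaluate is a hypothetical helper that trains one trial
    # and returns its training/validation curves.
    # curves = train_and_evaluate(optimizer="adam", lr=lr, betas=(beta1, 0.999))
    pass
```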

Neither study showed a significant influence on either the training or the validation curves.

From a short Q&A with ChatGPT and Reddit, the cause might be an inappropriate match between model complexity and the data (assuming my data is clean).
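
The quickest capacity check I've found is trying to overfit a single tiny, fixed batch. This is my own hedged sketch (PyTorch, with hypothetical shapes), not something from the playbook:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(32, 20)            # one tiny memorizable batch (hypothetical shapes)
y = torch.randint(0, 2, (32,))     # binary labels

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(500):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# If the loss can't be driven near zero on 32 memorized examples,
# the bottleneck is capacity or the optimization setup, not the grid.
print(f"final loss on the tiny batch: {loss.item():.4f}")
```

If even this test stalls, that would point at the model/data combination rather than the hyperparameter ranges being swept.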

I would greatly appreciate it if you could add your two cents to the playbook, even just a few sentences, on what the next "play" is when tuning doesn't have much influence on the curves or the results.
