I've followed the playbook's guidelines and run two "studies", but the results I got aren't covered in the playbook (a sketch of both grids follows the list):
Batch size tuning (16 to 512, in powers of 2)
Learning rate + beta1 tuning on Adam (9 combinations: 3 learning rates separated by factors of 10, each crossed with 3 beta1 values: 0.8, 0.9, 0.95)
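For concreteness, here is a minimal sketch of the two search grids described above. The base learning rate of 1e-3 (giving 1e-4 / 1e-3 / 1e-2) is my assumption, since the issue only says the three rates were separated by factors of 10; the `torch.optim.Adam` call in the comment is just one way each trial's settings could be consumed.

```python
import itertools

# Study 1: batch sizes from 16 to 512, doubling each step (powers of 2).
batch_sizes = [16, 32, 64, 128, 256, 512]

# Study 2: Adam learning rate x beta1 grid. The base rate of 1e-3 is an
# assumption (the issue only states 3 rates spaced by factors of 10);
# crossing 3 rates with 3 beta1 values gives the 9 combinations mentioned.
learning_rates = [1e-4, 1e-3, 1e-2]
beta1_values = [0.8, 0.9, 0.95]

for lr, beta1 in itertools.product(learning_rates, beta1_values):
    # Each trial would configure the optimizer with these values, e.g.
    # torch.optim.Adam(model.parameters(), lr=lr, betas=(beta1, 0.999))
    print(f"trial: lr={lr:g}, beta1={beta1}")
```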
Neither study showed a significant influence on either the training or the validation curves.
From a short Q&A with ChatGPT and Reddit, this might be due to a mismatch between model complexity and the data (assuming my data is clean).
I would greatly appreciate it if you could add your two cents to the playbook, even in just a few sentences, about what the next "play" should be when tuning barely influences the curves and the results.