Hi there! Thank you for making this implementation open-source!
I have one question though: although you have one backward step, you have two optimizers. Shouldn't you combine both models' parameters and use only one optimizer?
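For concreteness, here is a minimal sketch (not the repository's code) of the two setups being compared: two optimizers stepped after a single backward pass, versus one optimizer over the combined parameter lists. The model names, sizes, and data below are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical sub-models standing in for the two networks in the repo.
encoder = nn.Linear(16, 8)
head = nn.Linear(8, 4)
x, y = torch.randn(32, 16), torch.randn(32, 4)
criterion = nn.MSELoss()

# Option A: two optimizers, one backward pass, two step() calls.
opt_enc = torch.optim.SGD(encoder.parameters(), lr=1e-2)
opt_head = torch.optim.SGD(head.parameters(), lr=1e-2)
opt_enc.zero_grad()
opt_head.zero_grad()
loss = criterion(head(encoder(x)), y)
loss.backward()
opt_enc.step()
opt_head.step()

# Option B: one optimizer over the combined parameter list.
opt = torch.optim.SGD(list(encoder.parameters()) + list(head.parameters()), lr=1e-2)
opt.zero_grad()
loss = criterion(head(encoder(x)), y)
loss.backward()
opt.step()
```

Since optimizer state is kept per parameter (for plain SGD, and for Adam as well), the two setups with identical hyperparameters should produce the same updates, so a difference in training behaviour usually points to differing learning rates, weight decay, or schedules between the two optimizers.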
Thanks for your reply. That is what I am doing. Nevertheless, it seems that with two optimizers the loss decreases much faster than with one optimizer. What might be the reason for this?
Moreover, I have changed the optimizer to Adam but haven't been able to get a BCE loss lower than ~0.255 for a multi-label classification problem. Any suggestions?
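In case it helps to pin down the plateau, here is a minimal sketch of the setup described above (Adam plus binary cross-entropy for multi-label classification). The model, dimensions, and data are hypothetical placeholders, not the repository's code.

```python
import torch
import torch.nn as nn

num_labels = 10
model = nn.Linear(32, num_labels)      # hypothetical multi-label classifier
criterion = nn.BCEWithLogitsLoss()     # expects raw logits; applies sigmoid internally
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(64, 32)
y = torch.randint(0, 2, (64, num_labels)).float()  # multi-hot targets

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```

One common cause of a BCE loss that refuses to drop is applying the sigmoid twice, e.g. feeding already-sigmoided outputs into BCEWithLogitsLoss; if the model outputs probabilities rather than logits, nn.BCELoss is the matching criterion.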