After training with the default args, the result isn't good #1
Excuse me, after training with the default arguments, I get the Recall and NDCG scores, but the result isn't as good as the one reported in the paper.
Here is my result after 500 epochs:
Why is my result noticeably lower than what the paper reports?

Comments
Emma, I think that may be because:
By using the above settings, you will get a better result like ours (dropout is an important parameter; our code has been updated to the latest version). The reported metrics are recall@50 and ndcg@50.
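For reference, here is a minimal, self-contained sketch of how recall@K and NDCG@K are commonly computed for one user's ranked list with binary relevance. This is not the repository's evaluation code; the function and variable names (`ranked_items`, `ground_truth`) are illustrative assumptions.

```python
import numpy as np

def recall_at_k(ranked_items, ground_truth, k=50):
    """Fraction of the user's relevant items that appear in the top-k ranking."""
    top_k = set(ranked_items[:k])
    hits = len(top_k & set(ground_truth))
    return hits / len(ground_truth) if ground_truth else 0.0

def ndcg_at_k(ranked_items, ground_truth, k=50):
    """Normalized Discounted Cumulative Gain with binary relevance."""
    relevant = set(ground_truth)
    # DCG: relevant items contribute 1 / log2(rank + 2); rank is 0-based.
    dcg = sum(1.0 / np.log2(rank + 2)
              for rank, item in enumerate(ranked_items[:k]) if item in relevant)
    # Ideal DCG: all relevant items ranked at the very top.
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(min(len(relevant), k)))
    return dcg / idcg if idcg > 0 else 0.0

# Example: top-5 ranking for one user with two relevant items (7 and 2).
print(recall_at_k([3, 7, 1, 9, 4], [7, 2], k=5))  # 0.5
print(ndcg_at_k([3, 7, 1, 9, 4], [7, 2], k=5))    # ~0.39
```

Per-user scores are then averaged over all test users; implementations differ in details (ties, users with no test items), so exact numbers should come from the repository's own evaluation script.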
Thanks, it works! I'd like to ask another question: during training, I noticed that the loss keeps going down into negative values, dropping below negative twenty or thirty thousand. Does this mean the loss won't converge? It seems like it could keep decreasing indefinitely.
No, that's not the case. When optimizing, we drop the constant term of the loss (it is positive but has no gradient), so our loss will be negative; you can check the paper for details. It still converges once it has decreased to a certain level, and it converges quickly (compared with sampling-based methods).
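For intuition, here is a general sketch of why dropping the constant term makes the loss negative, assuming a standard whole-data weighted squared loss with binary labels; the exact ENSFM objective and its efficient reformulation are given in the paper.

```latex
% Whole-data weighted squared loss over all user--item pairs,
% with binary labels r_{uv} \in \{0,1\} and positive weights c_{uv}:
\[
L(\Theta) = \sum_{u \in U} \sum_{v \in V} c_{uv}\bigl(r_{uv} - \hat{r}_{uv}\bigr)^2
          = \underbrace{\sum_{u,v} c_{uv} r_{uv}^2}_{\text{constant, no gradient}}
            \;-\; 2\sum_{u,v} c_{uv} r_{uv}\hat{r}_{uv}
            \;+\; \sum_{u,v} c_{uv}\hat{r}_{uv}^2 .
\]
% Dropping the positive constant term gives the objective actually optimized:
\[
\tilde{L}(\Theta) = L(\Theta) - \sum_{u,v} c_{uv} r_{uv}^2 ,
\]
% which has the same gradients and the same minimizer as L(\Theta),
% but can be (and usually is) negative once \hat{r}_{uv} fits the data.
\]
```

So a large negative value is expected here; convergence shows up as the loss curve flattening out, not as it approaching zero.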
Hello, may I ask: when I run ENSFM, the loss is also negative and is indeed still decreasing, but at epoch 501 the HR metric is still 0 (sob).