Inconsistent results on the ml-1m dataset #9
Comments
To compare with LCFN, you need to use the dataset ml-lcfn, which is the same data as used in "Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters".
Noted with thanks. I will try ml-lcfn.
After running on ml-lcfn (with dropout: 0.5, neg-weight: 0.5), I got:
NDCG@10: 0.24197656885939905 < 0.2475. Not the same as reported in the README, but acceptable.
Have you read the README carefully? What is your embedding size setting? For a fair comparison, we also set the embedding size to 128, as used in the LCFN work.
Noted! Here are the results:
NDCG@10: 0.24640283029068843 < 0.2475. Much better!
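For readers following along, here is a minimal sketch of how NDCG@k is typically computed for a ranked recommendation list. This is only an illustration of the metric itself, not the evaluation code used in this repository:

```python
import numpy as np

def ndcg_at_k(ranked_items, ground_truth, k=10):
    """NDCG@k for one user: ranked_items is the model's top list,
    ground_truth is the set of held-out positive items."""
    ranked_items = ranked_items[:k]
    # DCG: binary relevance discounted by log2 of the rank position.
    dcg = sum(1.0 / np.log2(rank + 2)
              for rank, item in enumerate(ranked_items)
              if item in ground_truth)
    # Ideal DCG: all relevant items placed at the top of the list.
    ideal_hits = min(len(ground_truth), k)
    idcg = sum(1.0 / np.log2(rank + 2) for rank in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

# Example: two relevant items, hits at ranks 1 and 4.
print(ndcg_at_k([5, 9, 2, 7, 1], {5, 7}, k=10))
```

The reported numbers would be this quantity averaged over all test users.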
So I am curious how your results compare with SelfCF. Could you share a copy of your data with the train/test split already prepared?
Attached is the amazon-games data after 5-core processing.
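For anyone unfamiliar with the term, 5-core processing iteratively removes users and items with fewer than five interactions until the condition holds for everyone. A rough pandas sketch (the column names `user` and `item` are assumptions, not the attachment's actual schema):

```python
import pandas as pd

def k_core_filter(df, k=5):
    """Iteratively drop users/items with fewer than k interactions
    until every remaining user and item has at least k."""
    while True:
        user_counts = df["user"].value_counts()
        item_counts = df["item"].value_counts()
        before = len(df)
        df = df[df["user"].map(user_counts) >= k]
        df = df[df["item"].map(item_counts) >= k]
        if len(df) == before:  # nothing removed in this pass -> converged
            return df

# interactions = pd.read_csv("amazon-games.csv")  # assumed columns: user, item, timestamp
# interactions = k_core_filter(interactions, k=5)
```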
Thanks. I gave it a quick run; at epoch 100 the results were as follows. It looks much better than SelfCF? 🤔
Yes, those results are indeed good. Could you share the code? Thanks! Are you still using TensorFlow on your side?
The code is just the code already on GitHub.
OK, understood. Your earlier data used a last-one split, while this one uses a global-timeline split. Does that make no difference to the program?
No difference. As long as you put the training and test sets in the directory, it runs directly.
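For later readers, the two splits discussed above differ roughly as sketched below. This is a hedged illustration with assumed column names (`user`, `timestamp`), not the repository's actual preprocessing:

```python
import pandas as pd

def leave_last_one_split(df):
    """'Last one' split: each user's most recent interaction goes to the test set."""
    df = df.sort_values("timestamp")
    test = df.groupby("user").tail(1)
    train = df.drop(test.index)
    return train, test

def global_timeline_split(df, test_ratio=0.2):
    """Global-timeline split: the latest test_ratio of ALL interactions,
    ordered by one global timestamp, forms the test set."""
    df = df.sort_values("timestamp")
    cutoff = int(len(df) * (1 - test_ratio))
    return df.iloc[:cutoff], df.iloc[cutoff:]

# train, test = global_timeline_split(interactions, test_ratio=0.2)
# train.to_csv("train.csv", index=False); test.to_csv("test.csv", index=False)
```

Because the code only reads whatever train/test files are placed in the data directory, either split can be fed in without changing the program, which is the point made above.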
OK, thank you very much!
You're welcome. Feel free to reach out anytime.
Thank you very much for the patient replies. I just checked and found I sent you the wrong data; the file above is a small test sample, and the full dataset is below:
Hi, thanks for sharing the code.
With your source code unmodified (dropout: 0.5, neg-weight: 0.5), I tried ml-1m and got the following results:
(First column: Recall, second column: NDCG)
These are much lower than the numbers in the README:
NDCG@5: 0.2457, NDCG@10: 0.2475, NDCG@20: 0.2656
Could you help me reproduce your results on ml-1m? Thanks.