Implementation ambiguity #4
Hi @pinkfloyd06. I am not the author, but I have read the code and hope I can answer some of your questions. Note that I too might have made mistakes, so feel free to correct me.
@TheShadow29 Thank you for your answer.
Hello @xbresson ,
Let me first thank you for this work.
I went through your implementation, and I have some questions:
1- You set coarsening_levels = 4, yet only L[0] and L[2] are used:
x = self.graph_conv_cheby(x, self.cl1, L[0], lmax[0], self.CL1_F, self.CL1_K)
and
x = self.graph_conv_cheby(x, self.cl2, L[2], lmax[2], self.CL2_F, self.CL2_K)
What about L[1], L[3] and L[4]?
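My guess (an assumption, not confirmed by the repo): if the Graclus coarsening halves the graph at each level and each pooling layer uses stride 4, then one pooling jumps log2(4) = 2 coarsening levels, so the two convolutions only ever see the Laplacians at levels 0 and 2. The intermediate levels would exist only to build the balanced tree of fake nodes for pooling. A minimal sketch of that bookkeeping, with assumed pooling sizes:

```python
# Hypothetical illustration: each pooling of size p advances
# log2(p) coarsening levels, so with two stride-4 poolings the
# convolutions need Laplacians at levels 0 and 2 only.
pool_sizes = [4, 4]               # assumed pooling size per stage
level = 0
levels_used = [level]             # level seen by the first conv layer
for p in pool_sizes:
    level += p.bit_length() - 1   # log2(p) levels per pooling
    levels_used.append(level)
print(levels_used)                # levels at which a Laplacian is needed
```

Under these assumptions the convolutions use levels 0 and 2, and level 4 is only the size of the final pooled output, which would explain why L[1], L[3] and L[4] never appear in a graph_conv_cheby call.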
2- Why are lmax[0] and lmax[2] recalculated inside graph_conv_cheby, even though lmax is already an input parameter of that function?
3- Would you mind explaining why you rescale the Laplacian eigenvalues to [-1,1]?
rescale_L(L, lmax)
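For context on this question: the Chebyshev polynomials T_k(x) are defined on [-1, 1], so the Laplacian's spectrum (which lives in [0, lmax]) has to be mapped onto that interval before the recurrence is applied. A minimal sketch of what rescale_L presumably computes, i.e. the standard affine map 2L/lmax - I:

```python
import numpy as np
import scipy.sparse as sp

def rescale_L(L, lmax):
    # Affine map of the spectrum of L from [0, lmax] onto [-1, 1],
    # the interval on which the Chebyshev polynomials T_k are defined.
    I = sp.identity(L.shape[0], format='csr', dtype=L.dtype)
    return (2.0 / lmax) * L - I
```

For example, the Laplacian of a 3-node path graph has eigenvalues {0, 1, 3}; after rescaling with lmax = 3 they become {-1, -1/3, 1}, all inside [-1, 1] as required.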
4- concat(x, x_) is not used at all in graph_conv_cheby():
5- What if K=1 in graph_conv_cheby()?
I noticed that K must be at least 2.
How can I use it with K=1?
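For reference, here is how I understand the Chebyshev basis that graph_conv_cheby builds (the helper name cheby_filter_basis is mine, not from the repo). With K=1 the recurrence is never entered and only T_0(L)x = x survives, so the filter degenerates to a per-feature linear map of the input, with no neighborhood aggregation at all:

```python
import numpy as np

def cheby_filter_basis(L_rescaled, x, K):
    # Stack [T_0(L)x, ..., T_{K-1}(L)x] using the Chebyshev recurrence
    #   T_k(L)x = 2 L T_{k-1}(L)x - T_{k-2}(L)x.
    # K=1 keeps only T_0(L)x = x (identity filter, 1-hop information is
    # never mixed in); K=2 adds T_1(L)x = Lx, the first neighbor term.
    Xt = [x]                        # T_0(L)x = x
    if K > 1:
        Xt.append(L_rescaled @ x)   # T_1(L)x = Lx
    for _ in range(2, K):
        Xt.append(2 * (L_rescaled @ Xt[-1]) - Xt[-2])
    return np.stack(Xt)
```

So K=1 should be mathematically well defined; if the implementation rejects it, that would be a code-path restriction (e.g. the recurrence loop assuming both T_0 and T_1 exist) rather than a limitation of the filter itself.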
Thank you for your consideration.