missing .npz files for famous dataset #18
Hi @Linusnie, did you manage to resolve the above issue? I'm facing the same one.
@kbmufti Kind of. I moved on to using the pytorch version, where the pre-processing is applied automatically if you pass in a .xyz file, see here. I should say the sampling settings in the pytorch repo are slightly different from the tensorflow implementation, so it would still be nice to have the original.
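For reference, an .xyz point cloud is just one "x y z" triple per line, so it can be inspected with plain numpy before handing it to any preprocessing. A minimal sketch (the file name and contents below are made up for illustration):

```python
import numpy as np

# Write a tiny example .xyz file so the snippet is self-contained;
# a real scan would have tens of thousands of points.
with open("example.xyz", "w") as f:
    f.write("0.0 0.0 0.0\n1.0 0.0 0.0\n0.0 1.0 0.0\n")

# Load it as an (N, 3) float array, ready for downstream preprocessing.
points = np.loadtxt("example.xyz")
print(points.shape)  # (3, 3)
```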
@Linusnie, thank you for your response. What version of Python are you using? I am getting dependency conflicts.
I followed the instructions here: #2 (comment)
Hi, many thanks for your work on this method.
I'm looking into reproducing your results on the Famous dataset. But I noticed that the .npz files are missing from the download link in the readme (https://drive.google.com/drive/folders/1qre9mgJNCKiX11HnZO10qMZMmPv_gnh3?usp=sharing). Would you be able to add those files in?
I also tried recreating the point clouds with

```
python sample_query_point.py --out_dir /home/linus/workspace/data/neural_pull/famous_new/ --input_dir /home/linus/workspace/data/points2surf/famous_noisefree/04_pts/ --dataset famous
```

but I get the error `Since the 3DBenchy dataset has less than POINT_NUM_GT (=20000) points`. Could you clarify what value of `POINT_NUM_GT` was used for the famous dataset?
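To narrow down which inputs trip that check, one can count the points in each .xyz file in the input directory and compare against the threshold. A minimal sketch, assuming plain-text .xyz inputs; the directory name, file names, and helper function here are hypothetical, and the demo uses synthetic data in place of the real `04_pts` folder:

```python
import os
import numpy as np

POINT_NUM_GT = 20000  # the threshold quoted in the error message above

def undersized_clouds(input_dir, threshold=POINT_NUM_GT):
    """Return (name, point count) for .xyz files with fewer than `threshold` points."""
    small = []
    for name in sorted(os.listdir(input_dir)):
        if not name.endswith(".xyz"):
            continue
        pts = np.loadtxt(os.path.join(input_dir, name), ndmin=2)
        if pts.shape[0] < threshold:
            small.append((name, pts.shape[0]))
    return small

# Demo on a synthetic directory standing in for the real dataset folder:
os.makedirs("demo_pts", exist_ok=True)
np.savetxt(os.path.join("demo_pts", "tiny.xyz"), np.random.rand(100, 3))
np.savetxt(os.path.join("demo_pts", "big.xyz"), np.random.rand(25000, 3))
print(undersized_clouds("demo_pts"))  # [('tiny.xyz', 100)]
```

Any file it reports would need to be upsampled, or the script run with a lower `POINT_NUM_GT`, before the sampling step can succeed.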