The script used to run the benchmark and stats for the above-mentioned commit can be found in PR #324. Let me know if you have any issues. The evaluation metrics were run on a cluster of A100 GPUs with PyTorch 2.1.2 and cuda-toolkit 11.8. Please note that earlier versions of PyTorch, such as v2.0.1, have known issues with Splatfacto, so please do not use those PyTorch versions for Splatfacto metrics.
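For anyone re-running the benchmark, a quick way to confirm the environment matches (this just prints the installed versions; nothing gsplat-specific is assumed):

```bash
# Print the installed PyTorch and CUDA toolkit versions; avoid PyTorch 2.0.1,
# which has known issues for Splatfacto metrics.
python -c "import torch; print(torch.__version__, torch.version.cuda)"
```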
I think hardcoding the data factor as 4 is wrong. The data factors should be (4, 2, 2, 4, 4, 2, 2) for the scenes "bicycle", "bonsai", "counter", "garden", "stump", "kitchen", and "room", respectively, as in the sketch below.
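A minimal sketch of running all seven scenes with those per-scene factors (the flag names --data_dir, --data_factor, --result_dir and the directory layout are assumptions; check them against the benchmark script in #324):

```bash
# Per-scene downscale factors for the Mip360 benchmark: 4 for the outdoor
# scenes (bicycle, garden, stump), 2 for the indoor ones.
declare -A FACTORS=( [bicycle]=4 [bonsai]=2 [counter]=2 [garden]=4 [stump]=4 [kitchen]=2 [room]=2 )
for SCENE in "${!FACTORS[@]}"; do
    python simple_trainer_mcmc.py \
        --data_dir "data/360_v2/${SCENE}" \
        --data_factor "${FACTORS[$SCENE]}" \
        --result_dir "results/mcmc/${SCENE}"
done
```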
@maturk Thank you very much for your prompt response and for providing the benchmark script! I did get the same results using the above benchmark script.
I also figured out why my previous results were bad: I didn't explicitly run the evaluation command, and it seems that simple_trainer_mcmc.py automatically evaluates using the checkpoint at step 7000. Therefore, my previous results were evaluated at step 7000.
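If the trainer version you are on supports evaluating directly from a saved checkpoint (for example via a --ckpt option; the flag name and checkpoint filename pattern here are assumptions, so verify against your local script), the final-step metrics can be produced explicitly instead of relying on the intermediate step-7000 eval:

```bash
# Hypothetical eval-only invocation: --ckpt and the ckpt_29999.pt filename
# pattern are assumptions about the trainer's CLI; adapt to your setup.
python simple_trainer_mcmc.py \
    --data_dir data/360_v2/garden \
    --data_factor 4 \
    --result_dir results/mcmc/garden \
    --ckpt results/mcmc/garden/ckpts/ckpt_29999.pt
```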
Hi, I wanted to reproduce the evaluation results stated in commit 1cc3d22 on the Mip360 data, but I cannot reproduce them.
Right now, the metrics for the 7 scenes of Mip360 are as follows.
MCMC looks a lot worse than Splatfacto.
I am simply using simple_trainer_mcmc.py with default settings (I didn't change any config params); my command was of the following form (exact paths omitted):
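```bash
# Illustrative default run on a single Mip360 scene; the data and result
# paths are placeholders, and every other config param is left at its default.
python simple_trainer_mcmc.py --data_dir data/360_v2/bicycle --result_dir results/mcmc/bicycle
```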
Could you point me to how I can reproduce the results? Thank you!
Here are some example rendering comparisons between Splatfacto (left) and MCMC (right).