
Fail to reproduce results on llff dataset #41

Open

LolaDeng opened this issue Oct 16, 2022 · 5 comments
Labels: bug (Something isn't working)

Comments

@LolaDeng

Thanks for the amazing work. I tried to run the scripts on the nerf_llff dataset using the provided LLFF config file, but I can't seem to reproduce the results. The average PSNR I got was ~17, which is considerably lower than the results reported in the DVGO v2 paper.
Could you please suggest configuration changes that would give better results?

@shiyoung77

I got the same issue. The optimization doesn't work well on the llff dataset.

@sunset1995
Owner

That's strange. Are you using the data source from https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1?

I just tested again with:

python run.py --config configs/llff/fern.py --render_test
python run.py --config configs/llff/fern_lg.py --render_test

They take 6.5 and 10.5 minutes and achieve PSNR 24.77 and 25.06, respectively.

@francescodisario

francescodisario commented Nov 12, 2022

The same happened to me: average PSNR near 17 with the same configs. So strange...

Some updates...
I tried on a Pascal architecture (GTX 1000 series) and the average PSNR on fern is fine (around 24.5), while on our HPC cluster with NVIDIA A40s we get an average PSNR of 17. Of course the code is the same, as are the environments (it's actually a Docker container). Could this be architecture related?

sunset1995 added the bug (Something isn't working) label on Nov 15, 2022
@robot0321

The same happened to me, but it may be solved.
Interestingly, if you switch some options in the llff_default.py file, it suddenly works well (I didn't check why).

From the original code:

data = dict(
    dataset_type='llff',
    ndc=True,
    width=1008,
    height=756,
)

change it to this:

data = dict(
    dataset_type='llff',
    ndc=True,
    factor=4,
)

It seems these should not be different options, but they behave differently.
(Again, I didn't check why. There may be a bug while loading the LLFF dataset, or something.)

I checked this three times each.
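
For context, here is a guess at the mechanism (the helper name below is hypothetical, patterned on the original NeRF load_llff.py; the actual DVGO loader may differ): loaders derived from that file choose the image directory and downscaling path differently depending on whether factor or width/height is given, so the two configs can end up reading differently resized images. A minimal sketch:

import os

def pick_image_dir(basedir, factor=None, width=None, height=None):
    # Hypothetical helper, only to illustrate the branching.
    if factor is not None:
        # factor=4 -> reads (or creates) a pre-minified folder named images_4
        return os.path.join(basedir, 'images_{}'.format(factor))
    if width is not None and height is not None:
        # explicit resolution -> reads (or creates) a folder named
        # images_{W}x{H}, resized to exactly that resolution
        return os.path.join(basedir, 'images_{}x{}'.format(width, height))
    # neither option set -> full-resolution images
    return os.path.join(basedir, 'images')

# e.g. pick_image_dir('data/nerf_llff_data/fern', factor=4)               -> .../images_4
#      pick_image_dir('data/nerf_llff_data/fern', width=1008, height=756) -> .../images_1008x756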

@AoxiangFan


The workaround above (using factor=4 instead of width/height) works for me as well.
