Unable to run NGP examples #251

Open
cecabert opened this issue Aug 11, 2023 · 1 comment
@cecabert

Hi,

I'm trying to run the train_ngp_nerf_{occ,prop}.py examples on an RTX 3090 Ti. However, I always run into the following error when computing the gradient:

Traceback (most recent call last):
  File "nerfacc/examples/train_ngp_nerf_prop.py", line 247, in <module>
    grad_scaler.scale(loss).backward()
  File "site-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "site-packages/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
  File "site-packages/tinycudann/modules.py", line 107, in backward
    input_grad, params_grad = _module_function_backward.apply(ctx, doutput, input, params, output)
  File "site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "site-packages/tinycudann/modules.py", line 118, in forward
    input_grad, params_grad = ctx_fwd.native_tcnn_module.bwd(ctx_fwd.native_ctx, input, params, output, scaled_grad)
RuntimeError: include/tiny-cuda-nn/gpu_memory.h:459 cuMemAddressReserve(&m_base_address, m_max_size, 0, 0, 0) failed with error CUDA_ERROR_OUT_OF_MEMORY

I've tried playing with the init_batch_size and target_sample_batch_size values, but it didn't make any difference. On the other hand, train_mlp_nerf.py runs fine. I don't know if the issue is related to the configuration of the example, to tiny-cuda-nn, or to nerfacc.
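
For reference, this is roughly what I changed while experimenting (a sketch based on my local copy of the examples; the exact variable names, defaults, and the file where the encoding config lives may differ from upstream):

# Values I've been lowering in train_ngp_nerf_prop.py (names/defaults from my local copy).
init_batch_size = 1024              # reduced from the script's default
target_sample_batch_size = 1 << 16  # reduced from the script's default

# I also tried shrinking the tiny-cuda-nn hash grid to reduce memory pressure,
# assuming the encoding config is built in radiance_fields/ngp.py:
encoding_config = {
    "otype": "HashGrid",
    "n_levels": 16,
    "n_features_per_level": 2,
    "log2_hashmap_size": 17,   # lowered from 19
    "base_resolution": 16,
}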

Any help/feedback would be much appreciated.

@yvdu

yvdu commented May 5, 2024

I have the same problem here with

RuntimeError: D:/anaconda/envs/base2/Lib/site-packages/tiny-cuda-nn/include\tiny-cuda-nn/gpu_memory.h:592 cuMemSetAccess(m_base_address + m_size, n_bytes_to_allocate, &access_desc, 1) failed: CUDA_ERROR_OUT_OF_MEMORY

I've minimized the batch size, but it's still not working.
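
In case it helps narrow things down, here's a small check (a sketch, not specific to nerfacc) that can be dropped in right before grad_scaler.scale(loss).backward() to confirm whether the GPU is actually out of memory or whether only tiny-cuda-nn's address reservation is failing:

import torch

# Report how much device memory is actually free at the point of failure.
# torch.cuda.mem_get_info() returns (free_bytes, total_bytes) for the current device.
free_bytes, total_bytes = torch.cuda.mem_get_info()
print(f"free: {free_bytes / 1e9:.2f} GB / total: {total_bytes / 1e9:.2f} GB")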
