Hi,

I'm trying to run the train_ngp_nerf_{occ,prop}.py examples on an RTX 3090 Ti. However, I always run into the following error when computing the gradient:
```
Traceback (most recent call last):
  File "nerfacc/examples/train_ngp_nerf_prop.py", line 247, in <module>
    grad_scaler.scale(loss).backward()
  File "site-packages/torch/_tensor.py", line 487, in backward
    torch.autograd.backward(
  File "site-packages/torch/autograd/__init__.py", line 200, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
  File "site-packages/torch/autograd/function.py", line 274, in apply
    return user_fn(self, *args)
  File "site-packages/tinycudann/modules.py", line 107, in backward
    input_grad, params_grad = _module_function_backward.apply(ctx, doutput, input, params, output)
  File "site-packages/torch/autograd/function.py", line 506, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
  File "site-packages/tinycudann/modules.py", line 118, in forward
    input_grad, params_grad = ctx_fwd.native_tcnn_module.bwd(ctx_fwd.native_ctx, input, params, output, scaled_grad)
RuntimeError: include/tiny-cuda-nn/gpu_memory.h:459 cuMemAddressReserve(&m_base_address, m_max_size, 0, 0, 0) failed with error CUDA_ERROR_OUT_OF_MEMORY
```
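In case it helps with debugging, here is a small diagnostic (my own addition, not part of the example script) that could be dropped in right before the `grad_scaler.scale(loss).backward()` call to show the allocator state at the failure point. As far as I understand, tiny-cuda-nn reserves its GPU memory outside of PyTorch's caching allocator, so memory that PyTorch has reserved but not actually allocated would be unavailable to it:

```python
import torch

# Quick memory check just before the failing backward() call
# (debugging addition, not part of train_ngp_nerf_prop.py).
free_b, total_b = torch.cuda.mem_get_info()  # free/total bytes reported by the CUDA driver
print(f"driver-free memory: {free_b / 2**30:.2f} / {total_b / 2**30:.2f} GiB")
print(f"torch allocated:    {torch.cuda.memory_allocated() / 2**30:.2f} GiB")
print(f"torch reserved:     {torch.cuda.memory_reserved() / 2**30:.2f} GiB")
```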
I've tried playing with the init_batch_size and target_sample_batch_size values, but it made no difference. On the other hand, train_mlp_nerf.py runs fine. I don't know if the problem is in the configuration of the example, on the tiny-cuda-nn side, or on the nerfacc side.
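To be concrete, the kind of change I tried looks like this (the values below are only illustrative, not the repo defaults):

```python
# Near the top of train_ngp_nerf_prop.py -- representative of the edits I tried.
# Exact values varied between runs; these are illustrative, not the defaults.
init_batch_size = 1024              # lowered from the example's setting
target_sample_batch_size = 1 << 16  # lowered from the example's setting
```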
Any help/feedback would be much appreciated.