Config to limit CUDA memory usage #20
Replies: 7 comments 3 replies
-
Salute! Thanks for testing and reporting. I've got a spare 3080 Ti on my desk but not enough space to fit it in my PC. Could you provide insights into how you managed with the original setup? Which command-line arguments did you use? Any configuration tips are welcome. If you could help us out with this issue, that would also be great. Thanks in advance!
-
I get an OOM error on my own dataset too.
-
I have a lot of big datasets that OOM easily, depending on the `percent_dense` setting and the image resolution I'm using. One of the most straightforward ways to use less memory is to not store all the images in CUDA memory, and instead load each one at each iteration; they can be loaded asynchronously while the previous image is being rendered.
This perhaps has some overhead in terms of processing speed, but in my usage it's not much different, especially when the images are pinned beforehand.
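A sketch of the lazy-loading idea described above (an illustration, not this project's actual code; class and method names are made up): keep images on the CPU in pinned memory and upload each one asynchronously on a side stream so the copy can overlap with rendering.

```python
import torch

class LazyImages:
    """Keep images on the CPU and upload one per iteration on demand."""

    def __init__(self, images):
        self.use_cuda = torch.cuda.is_available()
        # pin_memory() gives page-locked host tensors, which is what makes
        # host-to-device copies truly asynchronous; only useful with CUDA
        self.images = ([img.pin_memory() for img in images]
                       if self.use_cuda else list(images))
        self.copy_stream = torch.cuda.Stream() if self.use_cuda else None

    def load(self, idx):
        img = self.images[idx]
        if self.copy_stream is None:
            return img  # CPU fallback: nothing to upload
        with torch.cuda.stream(self.copy_stream):
            # non_blocking=True returns immediately; the render stream
            # must synchronize with copy_stream before reading the tensor
            return img.to("cuda", non_blocking=True)
```

The side stream is what lets the next image's upload overlap with the current render; without it the copy would serialize on the default stream.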
-
I have created an issue: #21
-
The compute capability and CUDA dependency issues are solved. However, this comes with a CMake upgrade to 3.24.
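The 3.24 requirement is likely tied to automatic architecture detection: CMake 3.24 added the `native` value for `CMAKE_CUDA_ARCHITECTURES`, which compiles for the compute capability of the GPU present at configure time. A minimal sketch (the project name here is hypothetical):

```cmake
cmake_minimum_required(VERSION 3.24)
project(splat_trainer LANGUAGES CXX CUDA)  # hypothetical project name

# "native" (new in CMake 3.24) targets the compute capability of the
# GPU(s) found at configure time, e.g. 8.6 for a 3080 Ti
set(CMAKE_CUDA_ARCHITECTURES native)
```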
-
@BennetLeff @oliver-batchelor @ihorizons2022
-
I think we have a solution, so if nobody complains I will close this discussion.
-
Howdy! I got everything building and running on my 3080 Ti (so there's a new card confirmed working).
The truck demo scene worked fine. Later I tried to import my own scene, and torch reserved too much GPU memory. I don't have this problem with the same dataset in the Python/original implementation. This project would be more accessible if there were some configs to control this! I don't have time to fix it at the moment, but might soon :)
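One knob that might help in the meantime (an assumption on my part, since the trainer links against libtorch and I have not verified it here): PyTorch's caching allocator reads the `PYTORCH_CUDA_ALLOC_CONF` environment variable, and capping the maximum split size can reduce peak reserved memory at some speed cost.

```shell
# Assumption: the binary uses libtorch's CUDA caching allocator, which
# honours this environment variable; smaller split sizes reduce
# fragmentation-driven over-reservation.
export PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:128"
echo "$PYTORCH_CUDA_ALLOC_CONF"
```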