Does raster-vision support multi gpu usage? #1779
-
Hello, I have a machine learning computer with two NVIDIA GeForce 3090s. Does Raster Vision support multi-GPU usage, and if so, what is the protocol for utilizing both GPUs? I am using Python 3.9, CUDA 11.6, PyTorch 1.12.1, torchvision 0.13.1, and rastervision 0.20.2 on a Linux box running 22.04. Thanks!
Replies: 1 comment
-
There are two ways of using RV: as a low-code framework (see https://docs.rastervision.io/en/0.20/framework/index.html) and as a library. If you use it as a framework, it will use the built-in training code, which does not support multi-GPU training. However, if you use it as a library, you can supply your own training code, and there you can implement multi-GPU training yourself. For example, this is possible if you train with PyTorch Lightning. See https://docs.rastervision.io/en/0.20/usage/tutorials/lightning_workflow.html
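
To make this concrete, here is a minimal sketch of what library-style multi-GPU training with Lightning could look like. The `SegmentationModule` class, the torchvision DeepLabV3 model, and the random-tensor placeholder dataset are all illustrative assumptions, not Raster Vision APIs; in practice you would swap the placeholder data for a Raster Vision GeoDataset built as shown in the linked tutorial.

```python
# Minimal sketch: multi-GPU semantic segmentation training via PyTorch
# Lightning's DDP strategy. Assumes pytorch_lightning and torchvision are
# installed; the dataset below is a stand-in for a Raster Vision GeoDataset.
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models.segmentation import deeplabv3_resnet50


class SegmentationModule(pl.LightningModule):
    """Illustrative LightningModule wrapping a torchvision segmentation model."""

    def __init__(self, num_classes: int = 2, lr: float = 1e-4):
        super().__init__()
        self.model = deeplabv3_resnet50(num_classes=num_classes)
        self.loss_fn = torch.nn.CrossEntropyLoss()
        self.lr = lr

    def training_step(self, batch, batch_idx):
        x, y = batch
        out = self.model(x)['out']  # DeepLabV3 returns a dict of outputs
        loss = self.loss_fn(out, y)
        self.log('train_loss', loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.model.parameters(), lr=self.lr)


if __name__ == '__main__':
    # Placeholder data: replace with a Raster Vision dataset from
    # rastervision.pytorch_learner (see the Lightning tutorial above).
    x = torch.randn(8, 3, 256, 256)
    y = torch.randint(0, 2, (8, 256, 256))
    train_dl = DataLoader(TensorDataset(x, y), batch_size=2)

    # Lightning handles the multi-GPU plumbing: DDP launches one process
    # per GPU and synchronizes gradients across both cards.
    trainer = pl.Trainer(accelerator='gpu', devices=2, strategy='ddp',
                         max_epochs=1)
    trainer.fit(SegmentationModule(), train_dl)
```

With `strategy='ddp'`, Lightning runs one process per GPU and averages gradients across both 3090s, so the multi-GPU logic lives entirely in your own training script; no changes to Raster Vision itself are needed.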