Multiple GPU training #6

Open · WilliamLwj opened this issue Oct 8, 2020 · 1 comment

@WilliamLwj

Hi, I am trying to train the model for multiple epochs on two GPUs. Is there a way for me to specify multiple "--cuda" values so that I can use multiple GPUs?

@kamenbliznashki (Owner)

Hi - the code as it stands supports single-GPU training only. The easiest way to modify it for multi-GPU training is to wrap the model in torch.nn.DataParallel; see the PyTorch docs. A faster implementation is DistributedDataParallel, which the PyTorch docs also cover. I used the latter for training a generative model on multiple GPUs, and you can look at that implementation. Hope this helps.
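For reference, a minimal sketch of the DataParallel route. The `nn.Linear` model and tensor shapes below are placeholders, not from this repo; substitute the repo's own model and data pipeline:

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the repo's actual model.
model = nn.Linear(128, 10)

if torch.cuda.device_count() > 1:
    # nn.DataParallel replicates the module on every visible GPU and
    # splits each input batch along dim 0; outputs are gathered back
    # onto the default device before the loss is computed.
    model = nn.DataParallel(model)
model = model.cuda()

# The training loop itself is unchanged: the wrapper is transparent.
x = torch.randn(64, 128).cuda()
out = model(x)  # the batch of 64 is scattered across the GPUs
```

And a minimal DistributedDataParallel sketch, assuming one process per GPU launched via `torchrun --nproc_per_node=2 train_ddp.py` (the script name, model, and dummy loss are again placeholders):

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # torchrun sets RANK, LOCAL_RANK, WORLD_SIZE, and the rendezvous
    # variables (MASTER_ADDR/MASTER_PORT) for each spawned process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model standing in for the repo's actual model.
    model = nn.Linear(128, 10).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for _ in range(10):
        x = torch.randn(64, 128).cuda(local_rank)
        loss = model(x).pow(2).mean()  # dummy loss for illustration
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across processes here
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

With a real dataset, each process should also draw a distinct shard of the data, typically by passing a torch.utils.data.distributed.DistributedSampler to the DataLoader.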

Labels: none yet · Projects: none yet · Development: no branches or pull requests
2 participants