#24 adds a multi-GPU PyTorch example that demonstrates how to use Distributed Data Parallel training. However, in that example, training with multiple GPUs is no faster than training with a single GPU. See #24 (comment)
It would be worthwhile to monitor the training more closely, for instance by tracking GPU utilization, to understand why this is the case. A rough monitoring sketch is shown below.
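One way to track GPU utilization while the example runs is to poll NVML from a background thread in the training process. This is a minimal sketch, assuming the pynvml package is installed; the sampling interval and the placement of the training loop are placeholders, not part of the existing example.

```python
# Minimal sketch: log utilization of every visible GPU while training runs.
# Assumes pynvml is installed; the 2-second interval is an arbitrary choice.
import threading
import time

import pynvml


def log_gpu_utilization(stop_event, interval_s=2.0):
    """Poll NVML and print SM/memory utilization for each GPU until stopped."""
    pynvml.nvmlInit()
    handles = [
        pynvml.nvmlDeviceGetHandleByIndex(i)
        for i in range(pynvml.nvmlDeviceGetCount())
    ]
    while not stop_event.is_set():
        rates = [pynvml.nvmlDeviceGetUtilizationRates(h) for h in handles]
        print(" | ".join(f"GPU{i}: {r.gpu}% sm, {r.memory}% mem"
                         for i, r in enumerate(rates)))
        time.sleep(interval_s)
    pynvml.nvmlShutdown()


stop = threading.Event()
monitor = threading.Thread(target=log_gpu_utilization, args=(stop,), daemon=True)
monitor.start()

# ... run the DDP training loop from the example here ...

stop.set()
monitor.join()
```

Running `nvidia-smi dmon` or `nvidia-smi --loop=2` alongside the job would give the same information without touching the training script; the in-process version is just easier to correlate with epoch boundaries.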
Additional testing described in #24 (comment) shows that GPU utilization is high on two different GPU types. The lack of speedup could be related to the relatively small convolutional neural network model: with little compute per step, gradient synchronization and data loading can dominate the step time, so adding GPUs yields little benefit. Measuring per-rank throughput directly, as sketched below, would help confirm this.
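The sketch below times one epoch per rank and reports samples per second, assuming the DDP process group is already initialized; `model`, `loader`, `optimizer`, `criterion`, and `device` are placeholders for the objects in the example, not its actual names.

```python
# Minimal sketch: measure per-rank training throughput for one epoch.
# Assumes torch.distributed is initialized and the model is wrapped in DDP.
import time

import torch
import torch.distributed as dist


def timed_epoch(model, loader, optimizer, criterion, device):
    model.train()
    torch.cuda.synchronize(device)
    start = time.perf_counter()
    n_samples = 0
    for inputs, targets in loader:
        inputs, targets = inputs.to(device), targets.to(device)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()  # DDP all-reduces gradients here
        optimizer.step()
        n_samples += inputs.size(0)
    torch.cuda.synchronize(device)  # wait for queued kernels before stopping the clock
    elapsed = time.perf_counter() - start
    print(f"rank {dist.get_rank()}: {n_samples / elapsed:.1f} samples/s "
          f"over {elapsed:.1f}s")
    return n_samples / elapsed
```

If per-rank throughput stays roughly flat as GPUs are added, total throughput does scale and the "no speedup" observation would instead point at fixed per-epoch costs (startup, validation, data loading) rather than the DDP step itself.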