
Support GPUs on multiple machines (via docker-swarm or kubernetes)? #19

Open · Ledenel opened this issue Oct 20, 2020 · 1 comment
Labels: feature-request (Request for a new feature)

Comments
Ledenel commented Oct 20, 2020

Feature description:

Support docker-swarm (with GPU support) out of the box.

Problem and motivation:

As described here, it is currently not possible to run ml-hub with GPU support across multiple machines (where each machine may have one or more GPU cards). Since it is not easy to build a Kubernetes cluster with GPU support and management (and I'm not familiar with Kubernetes), perhaps a more lightweight solution (such as docker-swarm) could support this more seamlessly (via nvidia-docker). A sketch of what that might look like follows at the end of this comment.

Is this something you're interested in working on?

Yes
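
A rough idea of how a swarm service could reserve a GPU, assuming each node advertises a `gpu` node-generic-resource in its daemon.json and runs the NVIDIA container runtime; the image name and resource key below are illustrative assumptions, not part of ml-hub:

```python
# Hypothetical sketch: create a swarm service whose tasks each reserve one
# "gpu" generic resource, so the scheduler only places them on GPU nodes.
# Assumes nodes are configured with e.g.
#   "node-generic-resources": ["gpu=GPU-<uuid>"]
# in /etc/docker/daemon.json and use the nvidia container runtime.
import docker

client = docker.from_env()

service = client.services.create(
    image="mltooling/ml-workspace-gpu:latest",  # assumed image name
    name="ml-workspace",
    resources=docker.types.Resources(
        # Ask the swarm scheduler to reserve one "gpu" per task.
        generic_resources={"gpu": 1},
    ),
    mode=docker.types.ServiceMode("replicated", replicas=2),
)
print(service.id)
```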

Ledenel added the feature-request label on Oct 20, 2020
Ledenel (Author) commented Oct 20, 2020

By the way, Kubernetes seems to support GPU management via device plugins: https://kubernetes.io/docs/tasks/manage-gpus/scheduling-gpus/#using-device-plugins. So why is GPU mode not supported on Kubernetes? Is it due to a lack of standards, historical reasons, or is it just waiting for someone to implement it?
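
For reference, once the NVIDIA device plugin DaemonSet is installed, requesting a GPU is just an extended-resource limit on the pod spec. A minimal sketch with the Kubernetes Python client (pod name and image are illustrative only):

```python
# Minimal sketch, assuming the NVIDIA device plugin is already deployed:
# request one GPU via the extended resource name "nvidia.com/gpu".
from kubernetes import client, config

config.load_kube_config()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="cuda",
                image="nvidia/cuda:11.0-base",
                command=["nvidia-smi"],
                # The device plugin exposes GPUs as a schedulable resource,
                # so requesting the limit here is all the pod spec needs.
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```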

Ledenel changed the title from "Support docker swarm (with GPUs)?" to "Support GPUs on multiple machines (via docker-swarm or kubernetes)?" on Oct 20, 2020