From b228cc64f2d5498ec6637191242ed082417f4cb3 Mon Sep 17 00:00:00 2001
From: ImmanuelSegol <3ditds@gmail.com>
Date: Wed, 21 Feb 2024 07:21:27 -0500
Subject: [PATCH 1/2] Update docs/icicle/multi-gpu.md

Co-authored-by: Jeremy Felder
---
 docs/icicle/multi-gpu.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/icicle/multi-gpu.md b/docs/icicle/multi-gpu.md
index 13a0802..a25c4a0 100644
--- a/docs/icicle/multi-gpu.md
+++ b/docs/icicle/multi-gpu.md
@@ -51,7 +51,7 @@ To dive deeper and learn about the API checkout the docs for our different ICICL
 
 ## ZKContainer support for multi GPUs
 
-Multi GPU support should work with ZK-Containers by simple defining which devices the docker container should interact with:
+Multi GPU support should work with ZK-Containers by simply defining which devices the docker container should interact with:
 
 ```sh
 docker run -it --gpus '"device=0,2"' zk-container-image

From b56ee6a00dc9d23347d6db808fed4a4c662b8bc2 Mon Sep 17 00:00:00 2001
From: ImmanuelSegol <3ditds@gmail.com>
Date: Wed, 21 Feb 2024 07:21:58 -0500
Subject: [PATCH 2/2] Apply suggestions from code review

Co-authored-by: Jeremy Felder
---
 docs/icicle/multi-gpu.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/icicle/multi-gpu.md b/docs/icicle/multi-gpu.md
index a25c4a0..5adf265 100644
--- a/docs/icicle/multi-gpu.md
+++ b/docs/icicle/multi-gpu.md
@@ -13,7 +13,7 @@ Multi GPU is an approach at developing software to run on multiple GPU devices.
 
 ## Approaches to Multi GPU programming
 
-There are many [different strategies](https://github.com/NVIDIA/multi-gpu-programming-models) available for implementing multi GPU, whoever they really split into two catagories.
+There are many [different strategies](https://github.com/NVIDIA/multi-gpu-programming-models) available for implementing multi GPU; however, they generally fall into two categories.
 
 ### GPU Server approach
 
@@ -23,19 +23,19 @@
 This approach usually involves a single or multiple CPUs opening threads to read / write from multiple GPUs.
 
 This approach wont let us tackle larger computation sizes but it will allow us to compute multiple computations which we wouldn't be able to load onto a single GPU.
 
-For example lets say that you had to compute two MSM of size 2^20 on a 16GB VRAM GPU you would normally have preform them asynchronously. How ever if you double the number of GPUs in your system you can now run them in parallel.
+For example, let's say that you had to compute two MSMs of size 2^20 on a 16GB VRAM GPU; you would normally have to perform them asynchronously. However, if you double the number of GPUs in your system, you can now run them in parallel.
 
 ### Inter GPU approach
 
-This approach involves a more sphisticated approch to multi GPU computation. Using technologies such as [GPUDirect, NCCL, NVSHMEM](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-cwes1084/) and NVLink its possible to combine multiple GPUs and split a computation amongst the different devices.
+This is a more sophisticated approach to multi GPU computation. Using technologies such as [GPUDirect, NCCL, NVSHMEM](https://www.nvidia.com/en-us/on-demand/session/gtcspring21-cwes1084/) and NVLink, it's possible to combine multiple GPUs and split a computation among the different devices.
 
-This approach requires redesigning the algorithm at the software level to be compatible with splitting amongst devices. In some cases to lower latency to a minimum special inter GPU connections would be installed on a server to allow GPU direct communication with each other.
+This approach requires redesigning the algorithm at the software level to be compatible with splitting amongst devices. In some cases, to lower latency to a minimum, special inter GPU connections would be installed on a server to allow direct communication between multiple GPUs.
 
 # Writing ICICLE Code for Multi GPUs
 
-The approach we have taken for the moment is a GPU Server approach, we assume you have a machine with multiple GPUs and you wish to run some computation on each GPU.
+The approach we have taken for the moment is a GPU Server approach; we assume you have a machine with multiple GPUs and you wish to run some computation on each GPU.
 
 To dive deeper and learn about the API checkout the docs for our different ICICLE API
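
To make the GPU Server approach described in the patch concrete, here is a minimal sketch of the pattern: one host thread per GPU, each thread bound to its own device and running an independent computation. It deliberately uses only the plain CUDA runtime (`cudaGetDeviceCount`, `cudaSetDevice`) and `std::thread` rather than ICICLE's own API, and `run_computation_on_device` is a hypothetical placeholder for the per-device work (for example, one of the two MSMs mentioned above).

```cpp
// GPU Server pattern sketch: one CPU thread per GPU, each thread bound to its
// own device. run_computation_on_device() is a hypothetical stand-in for the
// real per-device work (e.g. an MSM); it is not part of any ICICLE API.
#include <cuda_runtime.h>
#include <cstdio>
#include <thread>
#include <vector>

void run_computation_on_device(int device_id) {
    // All CUDA calls issued from this thread now target `device_id`.
    cudaSetDevice(device_id);
    // ... allocate inputs, launch the computation, copy the results back ...
    std::printf("device %d: computation finished\n", device_id);
}

int main() {
    int device_count = 0;
    cudaGetDeviceCount(&device_count);

    // Open one host thread per visible GPU (e.g. the devices exposed to the
    // ZK-Container via --gpus '"device=0,2"').
    std::vector<std::thread> workers;
    for (int id = 0; id < device_count; ++id) {
        workers.emplace_back(run_computation_on_device, id);
    }
    for (auto &w : workers) {
        w.join();
    }
    return 0;
}
```

Because each thread owns exactly one device, doubling the number of GPUs lets the two MSMs from the example run fully in parallel instead of one at a time on a single card.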