Commit: Cleanup and update doc (#765)

Co-authored-by: Valentin Churavy <[email protected]>
luraess and vchuravy authored Sep 7, 2023
1 parent 06d52e8 commit eaabaa3
Showing 2 changed files with 21 additions and 10 deletions.
3 changes: 0 additions & 3 deletions .buildkite/pipeline.yml
@@ -55,7 +55,6 @@
if: build.message !~ /\[skip tests\]/
timeout_in_minutes: 60
env:
- # JULIA_MPI_TEST_ARRAYTYPE: CuArray
JULIA_MPI_TEST_NPROCS: 2
JULIA_MPI_PATH: "${BUILDKITE_BUILD_CHECKOUT_PATH}/openmpi"
OMPI_ALLOW_RUN_AS_ROOT: 1
@@ -101,7 +100,6 @@
if: build.message !~ /\[skip tests\]/
timeout_in_minutes: 60
env:
- # JULIA_MPI_TEST_ARRAYTYPE: CuArray
JULIA_MPI_TEST_NPROCS: 2
JULIA_MPI_PATH: "${BUILDKITE_BUILD_CHECKOUT_PATH}/openmpi"
OMPI_ALLOW_RUN_AS_ROOT: 1
@@ -191,7 +189,6 @@
soft_fail:
- exit_status: 1
env:
- # JULIA_MPI_TEST_ARRAYTYPE: ROCArray
JULIA_MPI_TEST_NPROCS: 2
JULIA_MPI_PATH: "${BUILDKITE_BUILD_CHECKOUT_PATH}/openmpi"
OMPI_ALLOW_RUN_AS_ROOT: 1
28 changes: 21 additions & 7 deletions docs/src/configuration.md
@@ -9,10 +9,10 @@ clusters or multi-GPU machines, you will probably want to configure against a
system-provided MPI implementation in order to exploit features such as fast network
interfaces and CUDA-aware or ROCm-aware MPI interfaces.

-The MPIPreferences.jl package allows the user to choose which MPI implementation to use in MPI.jl. It uses [Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl) to
-configure the MPI backend for each project separately. This provides
-a single source of truth that can be used for JLL packages (Julia packages providing C libraries)
-that link against MPI. It can be installed by
+The MPIPreferences.jl package allows the user to choose which MPI implementation to use in MPI.jl. It uses
+[Preferences.jl](https://github.com/JuliaPackaging/Preferences.jl) to configure the MPI backend for each
+project separately. This provides a single source of truth that can be used for JLL packages (Julia packages
+providing C libraries) that link against MPI. It can be installed by

```sh
julia --project -e 'using Pkg; Pkg.add("MPIPreferences")'
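
For context on the step that follows installation, here is a minimal sketch of selecting a system-provided MPI with `MPIPreferences.use_system_binary` (the function the docs section below refers to; with no arguments it searches the standard library paths, so the result depends on the local MPI installation):
```sh
# Record the system MPI library in this project's LocalPreferences.toml
julia --project -e 'using MPIPreferences; MPIPreferences.use_system_binary()'
```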
@@ -182,13 +182,27 @@ julia> MPIPreferences.use_system_binary()
(MPI) pkg> test
```

+### Testing GPU-aware buffers
+The test suite can target the CUDA-aware interface with [`CUDA.CuArray`](https://github.com/JuliaGPU/CUDA.jl)
+and the ROCm-aware interface with [`AMDGPU.ROCArray`](https://github.com/JuliaGPU/AMDGPU.jl) by passing the
+corresponding `test_args` keyword argument to `Pkg.test`.
+
+Run `Pkg.test` with `--backend=CUDA` to test CUDA-aware MPI buffers:
+```julia
+import Pkg; Pkg.test("MPI"; test_args=["--backend=CUDA"])
+```
+and with `--backend=AMDGPU` to test ROCm-aware MPI buffers:
+```julia
+import Pkg; Pkg.test("MPI"; test_args=["--backend=AMDGPU"])
+```
+
+!!! note
+    The `JULIA_MPI_TEST_ARRAYTYPE` environment variable no longer has any effect.

### Environment variables
The test suite can also be modified by the following variables:

- `JULIA_MPI_TEST_NPROCS`: How many ranks to use within the tests
-- `JULIA_MPI_TEST_ARRAYTYPE`: Set to `CuArray` or `ROCArray` to test the CUDA-aware interface with
-  [`CUDA.CuArray`](https://github.com/JuliaGPU/CUDA.jl) or the ROCm-aware interface with
-  [`AMDGPU.ROCArray`](https://github.com/JuliaGPU/AMDGPU.jl) or buffers.
- `JULIA_MPI_TEST_BINARY`: Check that the specified MPI binary is used for the tests
- `JULIA_MPI_TEST_ABI`: Check that the specified MPI ABI is used for the tests
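
For illustration, a minimal sketch combining these variables (the values shown, 2 ranks and a `system` binary, are assumptions about the local setup):
```sh
# Run the test suite with 2 ranks and check that the system MPI binary is used
export JULIA_MPI_TEST_NPROCS=2
export JULIA_MPI_TEST_BINARY=system
julia --project -e 'import Pkg; Pkg.test("MPI")'
```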

