This is a basic Julia wrapper for the Message Passing Interface (MPI), a portable message-passing system. Inspiration is taken from mpi4py, although we generally follow the C and not the C++ MPI API. (The C++ MPI API is deprecated.)
CMake is used to piece together the MPI wrapper. Currently a shared-library MPI installation for C and Fortran is required (tested with Open MPI and MPICH). To install MPI.jl using the Julia packaging system, run

```julia
Pkg.update()
Pkg.add("MPI")
```
Alternatively,

```julia
Pkg.clone("https://github.com/JuliaParallel/MPI.jl.git")
Pkg.build()
```

which will build and install the wrapper into `$HOME/.julia/vX.Y/MPI`.
- If you are trying to build on OSX with Homebrew, the necessary Fortran headers are not included in the OpenMPI bottle. To work around this, you can build OpenMPI from source:

```sh
brew install --build-from-source openmpi
```
Currently, MPI.jl relies on CMake to build a few C/Fortran source files needed by the library. Unfortunately, CMake does not follow the `PATH` variable when determining which compiler to use, which can cause problems if the compiler you want to use does not reside in a standard directory such as `/usr/bin`. You can override CMake's compiler detection by specifying the environment variables `CC`, `CXX`, and `FC` on the command line. The following example forces the compilation process to use the compilers found in the path:

```sh
CC=$(which gcc) CXX=$(which g++) FC=$(which gfortran) julia -e 'Pkg.add("MPI")'
```
It may be that CMake selects the wrong MPI version, or that CMake fails to correctly detect and configure your MPI implementation. You can override CMake's mechanism by setting certain environment variables:

```
JULIA_MPI_C_LIBRARIES
JULIA_MPI_Fortran_INCLUDE_PATH
JULIA_MPI_Fortran_LIBRARIES
```

These will set `MPI_C_LIBRARIES`, `MPI_Fortran_INCLUDE_PATH`, and `MPI_Fortran_LIBRARIES` when calling CMake, as described in its FindMPI module.
You can set these variables either in your shell startup file or, for example, via your `~/.juliarc` file. Here is an example:
ENV["JULIA_MPI_C_LIBRARIES"] = "-L/opt/local/lib/openmpi-gcc5 -lmpi"
ENV["JULIA_MPI_Fortran_INCLUDE_PATH"] = "-I/opt/local/include"
ENV["JULIA_MPI_Fortran_LIBRARIES"] = "-L/opt/local/lib/openmpi-gcc5 -lmpi_usempif08 -lmpi_mpifh -lmpi"
You can set other configuration variables as well (by adding a `JULIA_` prefix); the full list of variables currently supported is:

```
MPI_C_COMPILER
MPI_C_COMPILE_FLAGS
MPI_C_INCLUDE_PATH
MPI_C_LINK_FLAGS
MPI_C_LIBRARIES
MPI_Fortran_COMPILER
MPI_Fortran_COMPILE_FLAGS
MPI_Fortran_INCLUDE_PATH
MPI_Fortran_LINK_FLAGS
MPI_Fortran_LIBRARIES
MPI_INCLUDE_PATH
MPI_LIBRARIES
```
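For example, to point the build at specific compiler wrappers, something like the following could go in the same startup file (the paths here are purely illustrative placeholders):

```julia
# Illustrative placeholder paths -- adjust to your local MPI installation.
ENV["JULIA_MPI_C_COMPILER"]       = "/opt/local/bin/mpicc"
ENV["JULIA_MPI_Fortran_COMPILER"] = "/opt/local/bin/mpif90"
```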
On Windows, you need to install the Microsoft MPI runtime on your system (the SDK is not required). Then simply add the MPI.jl package with

```julia
Pkg.update()
Pkg.add("MPI")
```

If you would like to wrap an MPI function on Windows, keep in mind that you may need to add its signature to `src/win_mpiconstants.jl`.
To run a Julia script with MPI, first make sure that `using MPI` or `import MPI` is included at the top of your script. You should then be able to run the MPI job as expected, e.g., with

```sh
mpirun -np 3 julia 01-hello.jl
```
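For reference, a minimal script of this kind might look roughly as follows (a sketch in the spirit of `examples/01-hello.jl`, not necessarily its exact contents):

```julia
# hello.jl -- a minimal MPI program
import MPI

MPI.Init()

comm = MPI.COMM_WORLD
println("Hello world, I am rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
MPI.Barrier(comm)

MPI.Finalize()
```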
In Julia code building on this package, you may want to run MPI cleanup functions in a finalizer. This makes it impossible to manually call `MPI.Finalize()`, since the Julia finalizers may run after this call. To solve this, a C `atexit` hook that runs `MPI.Finalize()` can be set using `MPI.finalize_atexit()`. You can check whether this function was called by checking the global `Ref` `MPI.FINALIZE_ATEXIT`.
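A minimal sketch of how this might be used (assuming the usual initialization, and that `MPI.FINALIZE_ATEXIT` is read with `[]` since it is a `Ref`):

```julia
import MPI

MPI.Init()

# Register a C atexit hook so that MPI.Finalize() runs after Julia finalizers.
MPI.finalize_atexit()

# ... use MPI, possibly from finalizers of Julia objects ...

# Check whether the atexit hook has been registered.
@assert MPI.FINALIZE_ATEXIT[]
```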
In order for MPI calls to be made from a Julia cluster, MPI.jl requires the use of MPIManager, a cluster manager that starts the Julia workers using `mpirun`. Currently MPIManager only works with Julia 0.4. It has three modes of operation:

- Only worker processes execute MPI code. The Julia master process executes outside of, and is not part of, the MPI cluster. Free bidirectional TCP/IP connectivity is required between all processes.
- All processes (including the Julia master) are part of both the MPI and the Julia cluster. Free bidirectional TCP/IP connectivity is required between all processes.
- All processes are part of both the MPI and the Julia cluster, and MPI is used as the transport for Julia messages. This is useful in environments that do not allow TCP/IP connectivity between worker processes.
An example is provided in `examples/05-juliacman.jl`. The Julia master process is NOT part of the MPI cluster. The main script should be launched directly; MPIManager internally calls `mpirun` to launch the Julia/MPI workers. All workers started via MPIManager will be part of the MPI cluster.
```julia
MPIManager(;np=Sys.CPU_THREADS, mpi_cmd=false, launch_timeout=60.0)
```

If not specified, `mpi_cmd` defaults to `mpirun -np $np`. `stdout` from the launched workers is redirected back to the Julia session calling `addprocs` via a TCP connection. Thus the workers must be able to freely connect via TCP to the host session.
The following lines will typically be required on the Julia master process to support both Julia and MPI:

```julia
# import MPIManager
using MPI

# also import Distributed to use addprocs()
using Distributed

# specify the number of MPI workers, launch command, etc.
manager = MPIManager(np=4)

# start the MPI workers and add them as Julia workers too
addprocs(manager)
```
To execute code with MPI calls on all workers, use `@mpi_do`. `@mpi_do manager expr` executes `expr` on all processes that are part of `manager`. For example:

```julia
@mpi_do manager (comm = MPI.COMM_WORLD; println("Hello world, I am $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))"))
```

executes only on the MPI workers belonging to `manager`.
`examples/05-juliacman.jl` is a simple example of calling MPI functions on all workers, interspersed with Julia parallel methods. `cd` to the `examples` directory and run `julia 05-juliacman.jl`.
A single instantiation of `MPIManager` can be used only once to launch MPI workers (via `addprocs`). To create multiple sets of MPI clusters, use separate, distinct `MPIManager` objects.
- `procs(manager::MPIManager)` returns a list of Julia pids belonging to `manager`.
- `mpiprocs(manager::MPIManager)` returns a list of MPI ranks belonging to `manager`.
- The fields `j2mpi` and `mpi2j` of `MPIManager` are associative collections mapping Julia pids to MPI ranks and vice versa.
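Continuing the `addprocs(manager)` example above, these could be inspected roughly as follows (a sketch; the actual pids and ranks depend on your session):

```julia
using MPI
using Distributed

manager = MPIManager(np=4)
addprocs(manager)

julia_pids = procs(manager)     # e.g. [2, 3, 4, 5]
mpi_ranks  = mpiprocs(manager)  # e.g. [0, 1, 2, 3]

# Map a Julia pid to its MPI rank and back again.
rank = manager.j2mpi[julia_pids[1]]
pid  = manager.mpi2j[rank]
```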
- Useful in environments that do not allow TCP connections outside of the cluster.
- An example is in `examples/06-cman-transport.jl`:

```sh
mpirun -np 5 julia 06-cman-transport.jl TCP
```

This launches a total of 5 processes: MPI rank 0 is Julia pid 1, MPI rank 1 is Julia pid 2, and so on. The program must call `MPI.start` with the argument `TCP_TRANSPORT_ALL`.
On MPI rank 0, it returns a `manager` which can be used with `@mpi_do`. On the other processes (i.e., the workers) the function does not return.

To use MPI as the transport, `MPI.start` must instead be called with the option `MPI_TRANSPORT_ALL`:

```sh
mpirun -np 5 julia 06-cman-transport.jl MPI
```

will run the example using MPI as the transport.
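For orientation, a rough sketch of how such a script might be structured, based only on the calls described above (this is illustrative and not necessarily the exact contents of `examples/06-cman-transport.jl`):

```julia
using MPI
using Distributed

# Select the transport from the command line: "TCP" or "MPI".
transport = isempty(ARGS) ? "TCP" : ARGS[1]

# MPI.start returns a manager only on MPI rank 0; on the workers it does not return.
manager = transport == "MPI" ? MPI.start(MPI_TRANSPORT_ALL) :
                               MPI.start(TCP_TRANSPORT_ALL)

# From here on we are on MPI rank 0 (Julia pid 1).
@mpi_do manager begin
    comm = MPI.COMM_WORLD
    println("rank $(MPI.Comm_rank(comm)) of $(MPI.Comm_size(comm))")
end

# Shutdown/cleanup details are omitted in this sketch; see the example file.
```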
Julia interfaces to the Fortran versions of the MPI functions. Since the C and Fortran communicators are different, if a C communicator is required (e.g., to interface with a C library), it can be obtained with the Fortran-to-C communicator conversion:

```julia
juliacomm = MPI.COMM_WORLD
ccomm = MPI.CComm(juliacomm)
```
Convention: `MPI_Fun` => `MPI.Fun`. Constants like `MPI_SUM` are wrapped as `MPI.SUM`. Note also that arbitrary Julia functions `f(x,y)` can be passed as reduction operations to the MPI `Allreduce` and `Reduce` functions.
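As a sketch of the custom-reduction feature, in the older `MPI.Comm_rank`/`MPI.Reduce` style used throughout this README (the scalar method signatures are assumed here):

```julia
import MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)

# Built-in reduction operation.
total = MPI.Allreduce(rank, MPI.SUM, comm)

# An arbitrary Julia function of two arguments used as the reduction operator.
f(x, y) = x + 2y
custom = MPI.Reduce(rank, f, 0, comm)   # result is defined on the root (rank 0)

rank == 0 && println("sum = $total, custom f-reduction = $custom")

MPI.Finalize()
```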
| Julia Function (assuming `import MPI`) | Fortran Function |
| --- | --- |
| `MPI.Abort` | `MPI_Abort` |
| `MPI.Comm_dup` | `MPI_Comm_dup` |
| `MPI.Comm_free` | `MPI_Comm_free` |
| `MPI.Comm_get_parent` | `MPI_Comm_get_parent` |
| `MPI.Comm_rank` | `MPI_Comm_rank` |
| `MPI.Comm_size` | `MPI_Comm_size` |
| `MPI.Comm_spawn` | `MPI_Comm_spawn` |
| `MPI.Finalize` | `MPI_Finalize` |
| `MPI.Finalized` | `MPI_Finalized` |
| `MPI.Get_address` | `MPI_Get_address` |
| `MPI.Init` | `MPI_Init` |
| `MPI.Initialized` | `MPI_Initialized` |
| `MPI.Intercomm_merge` | `MPI_Intercomm_merge` |
| `MPI.mpitype` | `MPI_Type_create_struct` and `MPI_Type_commit` (see the mpitype note below) |
mpitype note: This is not strictly a wrapper for `MPI_Type_create_struct` and `MPI_Type_commit`; it is also an accessor for previously created types.
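A small sketch of that accessor behavior (hedged: the exact handling of composite types may depend on the MPI.jl version):

```julia
import MPI

MPI.Init()

# For a primitive type, mpitype returns the corresponding MPI datatype handle.
dt_float = MPI.mpitype(Float64)

# For an isbits composite type, a datatype is created on first use
# (via MPI_Type_create_struct / MPI_Type_commit) and cached for later calls.
struct Point
    x::Float64
    y::Float64
end
dt_point = MPI.mpitype(Point)

MPI.Finalize()
```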
| Julia Function (assuming `import MPI`) | Fortran Function |
| --- | --- |
| `MPI.Cancel!` | `MPI_Cancel` |
| `MPI.Get_count` | `MPI_Get_count` |
| `MPI.Iprobe` | `MPI_Iprobe` |
| `MPI.Irecv!` | `MPI_Irecv` |
| `MPI.Isend` | `MPI_Isend` |
| `MPI.Probe` | `MPI_Probe` |
| `MPI.Recv!` | `MPI_Recv` |
| `MPI.Send` | `MPI_Send` |
| `MPI.Test!` | `MPI_Test` |
| `MPI.Testall!` | `MPI_Testall` |
| `MPI.Testany!` | `MPI_Testany` |
| `MPI.Testsome!` | `MPI_Testsome` |
| `MPI.Wait!` | `MPI_Wait` |
| `MPI.Waitall!` | `MPI_Waitall` |
| `MPI.Waitany!` | `MPI_Waitany` |
| `MPI.Waitsome!` | `MPI_Waitsome` |
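As a small illustration of the non-blocking calls above, a ring-exchange sketch in the same older API style (signatures assumed as `Isend(buf, dest, tag, comm)` and `Irecv!(buf, src, tag, comm)`):

```julia
import MPI

MPI.Init()
comm   = MPI.COMM_WORLD
rank   = MPI.Comm_rank(comm)
nranks = MPI.Comm_size(comm)

# Each rank sends its id to the next rank and receives one from the previous rank.
dst = mod(rank + 1, nranks)
src = mod(rank - 1, nranks)

sendbuf = Float64[rank]
recvbuf = zeros(Float64, 1)

rreq = MPI.Irecv!(recvbuf, src, 0, comm)   # tag 0
sreq = MPI.Isend(sendbuf, dst, 0, comm)

MPI.Waitall!([rreq, sreq])
println("rank $rank received $(recvbuf[1]) from rank $src")

MPI.Finalize()
```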
| Julia Function (assuming `import MPI`) | Fortran Function |
| --- | --- |
| `MPI.Allgather` | `MPI_Allgather` |
| `MPI.Allgatherv` | `MPI_Allgatherv` |
| `MPI.Alltoall` | `MPI_Alltoall` |
| `MPI.Alltoallv` | `MPI_Alltoallv` |
| `MPI.Barrier` | `MPI_Barrier` |
| `MPI.Bcast!` | `MPI_Bcast` |
| `MPI.Exscan` | `MPI_Exscan` |
| `MPI.Gather` | `MPI_Gather` |
| `MPI.Gatherv` | `MPI_Gatherv` |
| `MPI.Reduce` | `MPI_Reduce` |
| `MPI.Scan` | `MPI_Scan` |
| `MPI.Scatter` | `MPI_Scatter` |
| `MPI.Scatterv` | `MPI_Scatterv` |
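A brief sketch combining a few of these collectives, again in the older API style (the `Bcast!(buf, count, root, comm)` and scalar `Gather(obj, root, comm)` signatures are assumed):

```julia
import MPI

MPI.Init()
comm = MPI.COMM_WORLD
rank = MPI.Comm_rank(comm)
root = 0

# Broadcast an array from the root to all ranks (in place).
buf = rank == root ? collect(1.0:4.0) : zeros(4)
MPI.Bcast!(buf, length(buf), root, comm)

# Gather one value from every rank onto the root.
gathered = MPI.Gather(rank, root, comm)

MPI.Barrier(comm)
rank == root && println("gathered ranks: $gathered")

MPI.Finalize()
```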
| Julia Function (assuming `import MPI`) | Fortran Function |
| --- | --- |
| `MPI.Win_create` | `MPI_Win_create` |
| `MPI.Win_create_dynamic` | `MPI_Win_create_dynamic` |
| `MPI.Win_allocate_shared` | `MPI_Win_allocate_shared` |
| `MPI.Win_shared_query` | `MPI_Win_shared_query` |
| `MPI.Win_attach` | `MPI_Win_attach` |
| `MPI.Win_detach` | `MPI_Win_detach` |
| `MPI.Win_fence` | `MPI_Win_fence` |
| `MPI.Win_flush` | `MPI_Win_flush` |
| `MPI.Win_free` | `MPI_Win_free` |
| `MPI.Win_sync` | `MPI_Win_sync` |
| `MPI.Win_lock` | `MPI_Win_lock` |
| `MPI.Win_unlock` | `MPI_Win_unlock` |
| `MPI.Get` | `MPI_Get` |
| `MPI.Put` | `MPI_Put` |
| `MPI.Fetch_and_op` | `MPI_Fetch_and_op` |
| `MPI.Accumulate` | `MPI_Accumulate` |
| `MPI.Get_accumulate` | `MPI_Get_accumulate` |