Configuring
The configuration script (`configure.py` in the code root directory) is written in Python. Python 2.7 or above is required to run it (older versions may work if the `argparse` module is available, but they are not officially supported). The configuration script selects the source code and generates a Makefile according to the specified options. To see the list of available options, run the following from the code root directory:
> python configure.py -h
The following option flags are available. `--option [...]` means a parameter is required, and the possible choices for each option are shown in the help message.
- `-h, --help` : show the help message
- `--prob [problem_generator]` : select a problem generator (from `src/pgen/`, matching filename)
- `--coord [coordinates]` : select a coordinate system (from `src/coordinates/`, matching filename)
- `--eos [eos]` : select an equation of state (adiabatic, isothermal, or general)
- `--flux [riemann_solver]` : select a Riemann solver
- `--nghost [nghost]` : set `NGHOST` to some nonnegative integer
- `--nscalars [nscalars]` : set `NSCALARS` to some nonnegative integer for the number of passive scalar species
- `-b` : enable magnetic fields
- `-s` : enable special relativity
- `-g` : enable general relativity
- `-t` : enable interface frame transformations for GR
- `-debug` : enable debug flags; overrides other compiler options
- `-float` : enable single precision (default is double precision)
- `-mpi` : enable MPI parallelization
- `-omp` : enable OpenMP parallelization
- `-hdf5` : enable HDF5 output (requires the HDF5 library)
- `--hdf5_path [path]` : path to HDF5 libraries
- `-h5double` : write HDF5 floating-point output as `H5T_NATIVE_DOUBLE` (default is `H5T_NATIVE_REAL`)
- `-fft` : enable FFT capabilities (requires the FFTW library)
- `--fftw_path [path]` : path to FFTW libraries
- `--grav [grav_solver]` : select a self-gravity solver
- `--cxx [compiler]` : select a C++ compiler and predefined compiler flags (works with or without `-mpi`)
- `--ccmd [compiler command]` : set a compiler command overriding `--cxx` (when `-mpi` is not used)
- `--mpiccmd [compiler command]` : set a compiler command overriding `--cxx` and the `-mpi` wrapper compiler syntax
- `--cflag="[compiler flags]"` : add additional flags to pass to the compiler (appended, allowing previous flags to be overridden)
- `--include [path]` : use `-Ipath` when compiling object files
- `--lib_path [path]` : use `-Lpath` when linking the binary executable
- `--lib [library]` : use `-llibrary` when linking the binary executable
Note that the `argparse` module allows each long option and parameter to be specified via `--option=parameter` or `--option parameter`.
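To see why both spellings behave identically, here is a minimal sketch of how `argparse` (which `configure.py` uses) parses them; the two options shown are a small illustrative subset of the real script's options:

```python
import argparse

# Small illustrative subset of configure.py-style options.
parser = argparse.ArgumentParser()
parser.add_argument('--prob')
parser.add_argument('--flux')

# Both spellings parse to the same result:
args_eq = parser.parse_args(['--prob=shock_tube', '--flux=hllc'])
args_sp = parser.parse_args(['--prob', 'shock_tube', '--flux', 'hllc'])
assert args_eq.prob == args_sp.prob == 'shock_tube'
assert args_eq.flux == args_sp.flux == 'hllc'
```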
Some combinations are prohibited. For example, the HLLD approximate Riemann solver cannot be used without enabling magnetic fields (`-b`). In most (but not necessarily all!) such cases, the script will issue a warning and quit. The order of options does not matter. Note that certain sets of options are required for some problem generators (e.g., an MHD problem requires that magnetic fields be enabled), but the script does not check this automatically.
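The kind of consistency check described above can be sketched as follows; the function and variable names here are purely illustrative, not the actual code of `configure.py`:

```python
# Hypothetical sketch of an option-compatibility check (illustrative only;
# configure.py's real checks differ in detail).
def check_options(flux, b):
    """Quit with a message if an MHD-only Riemann solver is chosen without -b."""
    mhd_only_solvers = {'hlld'}  # assumption: HLLD requires magnetic fields
    if flux in mhd_only_solvers and not b:
        raise SystemExit('Flux ' + flux + ' requires magnetic fields (-b)')

check_options('hllc', b=False)   # fine: HLLC works without magnetic fields
try:
    check_options('hlld', b=False)
except SystemExit as err:
    print(err)  # Flux hlld requires magnetic fields (-b)
```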
Because mesh refinement (both static and adaptive) is fully integrated into the underlying algorithms, options are not needed to use it.
Some environments have compiler commands different from the standard ones. For example, on some Cray supercomputers, the compiler command is always `CC` no matter which compiler is selected. In this case, you can override the compiler command by configuring the code with `--ccmd CC`. You may also want to add flags (overriding incompatible, earlier flags generated by the configure script), for example to specify a target architecture. This can be done with the `--cflag` option. Note that this option should be set with an `=` sign to avoid the script trying to interpret any dashes. For example, `--cflag="-O2 -xCORE-AVX512"`.
The linear-wave propagation test on a single core without magnetic fields using the HLLC approximate Riemann solver:
> python configure.py --prob linear_wave --flux hllc
The Orszag-Tang test (a typical 2D MHD test problem) in parallel using MPI on IBM BlueGene/Q, for debugging:
> python configure.py --prob orszag_tang -b --flux hlld --cxx bgxl -mpi -debug
An MHD torus problem (similar to Stone & Pringle 2001) in spherical-polar coordinates, with hybrid parallelization and HDF5 output, using the Intel C++ Compiler:
> python configure.py --prob sphtorus --coord spherical_polar -b --flux hlld --cxx icc -mpi -omp -hdf5
While Athena++ supports many different algorithms, some are better than others. For hydrodynamics without magnetic fields, we recommend the HLLC (`hllc`) or Roe (`roe`) approximate Riemann solvers, because they are more accurate. For MHD, either the HLLD (`hlld`) or Roe solver is recommended. The HLLD solver is almost as accurate as Roe's, but it is somewhat faster and more robust in most situations.
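As a concrete illustration of these recommendations, configure lines for simple hydrodynamic and MHD setups might look like the following (using the `shock_tube` problem generator as a stand-in for whatever problem you are running):

```shell
# Hydrodynamics: HLLC Riemann solver (no magnetic fields)
python configure.py --prob shock_tube --flux hllc

# MHD: HLLD Riemann solver (requires -b)
python configure.py --prob shock_tube -b --flux hlld
```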