SMR and AMR
Static Mesh Refinement (SMR) means that some regions of the computational domain are covered by finer grids, but these grids are fixed rather than adaptive. SMR is not as flexible as AMR, but it is faster (and easier to analyze) and is often sufficient for many problems in astrophysics. We therefore recommend using SMR instead of AMR when the regions of interest are known in advance. For details of the grid structure, see Static Mesh Refinement.
Let us go back to the blast wave test in Cartesian coordinates, and suppose we want to resolve the region near the center of the explosion (note: this is just an example and is not physically motivated - in practice you would probably want to resolve the shock front; see below for such an application with AMR).
First of all, because SMR involves a complicated grid structure, HDF5 output is strongly recommended. To configure the code with HDF5:
> python configure.py --prob blast --flux hlld -b -hdf5
This configuration includes magnetic fields. To enable MPI parallelization, simply add the -mpi option. Note that no additional configuration option is required for SMR or AMR.
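For example, assuming an MPI library is installed on your system, the same configuration with MPI enabled would be:
> python configure.py --prob blast --flux hlld -b -hdf5 -mpi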
Let us place finer grids near the center where the explosion occurs. First, set the refinement flag to static in the input file. Then the computational domain must be decomposed into smaller MeshBlocks, because a MeshBlock is the unit of refinement. Smaller MeshBlocks are more flexible but computationally less efficient, so the MeshBlock size must be chosen carefully. The following example produces a root grid of 64^3 cells and MeshBlocks of 16^3 cells:
<mesh>
nx1 = 64 # Number of zones in X1-direction
x1min = -1.0 # minimum value of X1
x1max = 1.0 # maximum value of X1
ix1_bc = periodic # inner-X1 boundary flag
ox1_bc = periodic # outer-X1 boundary flag
nx2 = 64 # Number of zones in X2-direction
x2min = -1.0 # minimum value of X2
x2max = 1.0 # maximum value of X2
ix2_bc = periodic # inner-X2 boundary flag
ox2_bc = periodic # outer-X2 boundary flag
nx3 = 64 # Number of zones in X3-direction
x3min = -1.0 # minimum value of X3
x3max = 1.0 # maximum value of X3
ix3_bc = periodic # inner-X3 boundary flag
ox3_bc = periodic # outer-X3 boundary flag
num_threads = 1 # Number of OpenMP threads per process
refinement = static # use static mesh refinement
<meshblock>
nx1 = 16 # Number of cells per MeshBlock in X1-direction
nx2 = 16 # Number of cells per MeshBlock in X2-direction
nx3 = 16 # Number of cells per MeshBlock in X3-direction
Next, we have to specify the regions to be refined. To double the resolution near the center, add the following block:
<refinement1>
x1min=-0.5
x1max= 0.5
x2min=-0.5
x2max= 0.5
x3min=-0.5
x3max= 0.5
level=1
You can add more blocks named refinement2, refinement3, and so on. It is fine if the refinement regions overlap each other; the code generates the minimum set of grids satisfying all the conditions. If you want even higher resolution, simply set a higher level, as in the sketch below.
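For example, a hypothetical second region refined to level 2 (the coordinates here are purely illustrative) could be specified as:
<refinement2>
x1min=-0.25
x1max= 0.25
x2min=-0.25
x2max= 0.25
x3min=-0.25
x3max= 0.25
level=2
The code automatically fills in the intermediate level-1 MeshBlocks needed to connect such a region to the rest of the grid.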
Before actually running the simulation, it is highly recommended to check the grid structure by running the code with the -m option:
> ~/athena/bin/athena -i athinput.example -m 1
Root grid = 4 x 4 x 4 MeshBlocks
Total number of MeshBlocks = 120
Number of physical refinement levels = 1
Number of logical refinement levels = 3
Physical level = 0 (logical level = 2): 56 MeshBlocks, cost = 56
Physical level = 1 (logical level = 3): 64 MeshBlocks, cost = 64
Number of parallel ranks = 1
Rank = 0: 120 MeshBlocks, cost = 120
Load Balancing:
Minimum cost = 1, Maximum cost = 1, Average cost = 1
See the 'mesh_structure.dat' file for a complete list of MeshBlocks.
Use 'python ../vis/python/plot_mesh.py' or gnuplot to visualize mesh structure.
This outputs statistics about the MeshBlocks created, as well as the load balance if you specify the number of processes you want to use as the argument of the "-m" option. It also generates "mesh_structure.dat", which can be plotted with gnuplot:
> gnuplot
> splot "mesh_structure.dat" w l
Please note that even though there are 4^3 = 64 MeshBlocks in the root level and 64 MeshBlocks in the refined level, the total number of MeshBlocks is 120, not 128. This is because Athena++ does not solve overlapping MeshBlocks: each refined coarse MeshBlock is replaced by 8 finer MeshBlocks in 3D (4 in 2D). Here the 8 coarse MeshBlocks covering the refinement region are replaced by 64 fine ones, giving 64 - 8 + 64 = 120. This often results in a MeshBlock count that is not a power of 2, but you do not have to worry about it - no computational resources are wasted; in fact, resources are saved. You just need to assign a reasonable number of processes for parallel calculations.
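For example, to preview how the 120 MeshBlocks would be distributed among 16 MPI ranks (the rank count here is just an illustration), pass that number to -m:
> ~/athena/bin/athena -i athinput.example -m 16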
If the grid structure looks as expected, you can run it and visualize the data as usual.
You can visualize the mesh structure using the "Mesh" plot in VisIt. However, this plot can be too crowded because it draws all the cells; the "IndexSelect" operator is useful for extracting the borders between MeshBlocks.