
Welcome to the nubhlight wiki!

Installing dependencies for nubhlight on a Debian-based system

Tested on Ubuntu 18

sudo apt install g++ build-essential # compilers
sudo apt install gsl-bin libgsl-dev # gnu scientific library
sudo apt install libopenmpi-dev # MPI
sudo apt install libhdf5-openmpi-dev # parallel HDF5

Getting started with analysis

Open up Python in your output directory and do the following:

import sys
sys.path.append('/path/to/nubhlight/scripts')
sys.path.append('/path/to/nubhlight/scripts/analysis')
import hdf5_to_dict as io
hdr = io.load_hdr('dump_0000000.h5')               # loads the dump header
geom = io.load_geom(hdr)                           # loads geometry/grid info
dump = io.load_dump('dump_0000000.h5', geom=geom)  # loads the dump itself
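To loop over an entire run, the dump files can be globbed with the standard library. This is a minimal sketch that relies only on the load_dump call and the geom object from the snippet above; the printed field dump['t'] is described in the dump-object listing below.

import glob
for fname in sorted(glob.glob('dump_*.h5')):
    dump = io.load_dump(fname, geom=geom)
    print(fname, dump['t'])  # output time of each dump, in code units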

Units

hdr['X_unit'] converts the quantity X from code units to cgs. For example, hdr['RHO_unit'] converts density to cgs.
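For example, converting the density array of a loaded dump to cgs is a one-liner (a minimal sketch using only the RHO and RHO_unit entries documented on this page):

rho_cgs = dump['RHO'] * hdr['RHO_unit']  # code-unit density -> g/cm^3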

Items in the geom object:

gcov (192, 128, 4, 4)   : covariant metric (Kerr-Schild metric, HARM coordinates)
gcon (192, 128, 4, 4)   : contravariant metric (-- // --)
gdet (192, 128)         : \sqrt{-g}
alpha (192, 128, 66)    : lapse (extra dimension for plotting)
X1 (192, 128, 66)       : {X1(0:log(R)), X2(0:1), X3(0:tau)} -- code coordinates
X2 (192, 128, 66)
X3 (192, 128, 66)
x (192, 128, 66)        : {x, y, z} -- "corotating" Cartesian Kerr-Schild coordinates
y (192, 128, 66) 
z (192, 128, 66)
X1f (193, 129, 67)      : {X1f, X2f, X3f} -- same for the positions of grid nodes
X2f (193, 129, 67)
X3f (193, 129, 67)
xf (193, 129, 67)       : {xf, yf, zf} -- "corotating" Cartesian KS for grid nodes  
yf (193, 129, 67)
zf (193, 129, 67)
Lambda_h2cart_con (192, 128, 66, 4, 4) : transformation matrix \Lambda: HARM -> Cartesian
Lambda_h2cart_cov (192, 128, 66, 4, 4)
r (192, 128, 66)                       : {r, th, phi} -- corotating spherical Kerr-Schild
th (192, 128, 66)
phi (192, 128, 66)
rcyl (192, 128, 66)                    : cylindrical radius
Lambda_h2bl_con (192, 128, 4, 4)       : transformation: HARM -> Boyer-Lindquist
Lambda_h2bl_cov (192, 128, 4, 4)
Lambda_bl2cart_con (192, 128, 66, 4, 4) : transformation: Boyer-Lindquist -> Cartesian
Lambda_bl2cart_cov (192, 128, 66, 4, 4)
Lambda_h2bl_con_3d (192, 128, 66, 4, 4) : transformation: HARM -> Boyer-Lindquist
Lambda_h2bl_cov_3d (192, 128, 66, 4, 4)
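As a usage sketch, gdet can be cross-checked against gcov, and the Lambda matrices can be contracted against 4-vectors with numpy.einsum. The contraction convention below (new index first, contraction over the last index of Lambda) and the (192, 128, 66, 4) shape assumed for dump['ucon'] (described in the next section) are assumptions; check them against the analysis scripts.

import numpy as np

# sanity check: sqrt(-g) should match the determinant of the covariant metric
gdet_check = np.sqrt(-np.linalg.det(geom['gcov']))   # shape (192, 128)

# transform the contravariant 4-velocity from HARM to Cartesian components;
# this should reproduce the ucon_cart array described in the next section
ucon_cart = np.einsum('abcij,abcj->abci',
                      geom['Lambda_h2cart_con'], dump['ucon'])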

Items in the dump object:

hdr             : header
geom            : geometry object (see above)
t               : time of the output in geometric units [GM/c^3]
dump_cnt        : index of the dump (dump count)
RHO             : density [geom]
UU              : internal energy density [geom?]
U1, U2, U3      : u^k/u^0  == dx^i/dt
B1, B2, B3      : magnetic field
Ye              : electron fraction
Ye_em           : -- // -- assuming no absorption happened
ATM             : 0..1: artificial atmosphere at the beginning of the simulation
jcon            : contravariant 4-current
divb            : divergence of the magnetic field: div(B)
fail_save       : records if fix-ups happened in this cell
Rmunu           : radiation energy-momentum
Nsph            : # of superphotons (= MC packets in a given cell)
nph             : number density of neutrinos in a cell
nuLnu           : \nu L_\nu (outgoing neutrino luminosity) binned in angles and energies
Jrad            : emissivity
Nem             : # of MC packets emitted
Nabs            : -- // -- absorbed
Nsc             : -- // -- scattered
radG_int        : source term of the fluid equations due to radiation, time-averaged between the dumps
tau_cool        : cooling time (neutrino cooling timescale)
dtau_avg        : optical depth experienced by an average neutrino in a cell over one time step
dtau_scatt      : -- // -- for scattering
dtau_tot        : -- // -- for both absorption and scattering
Nem_phys        : # of neutrinos emitted
Nabs_phys       : # of neutrinos absorbed?
PRESS           : pressure [erg/cm3 = dyne/cm2]
TEMP            : temperature (stored as log10 of the value in MeV)
ENT             : entropy [k_B/baryon]
Theta           : dimensionless temperature / electron temperature (kb T / m_e c^2)
THETA           : ?
Thetae          : ?
tau_heat        : heating timescale
Qrad            : number of packets emitted per cooling time? (should worry if <10)
Nem_e           : alias to the electron-neutrino component of the emitted-neutrino array Nem_phys
Nem_anti        : -- // -- for electron anti-neutrinos in Nem_phys
Nem_x           : -- // -- for heavy-lepton neutrinos in Nem_phys
Nabs_e          : alias to the electron-neutrino component of the absorbed-neutrino array Nabs_phys
Nabs_anti       : -- // -- for electron anti-neutrinos in Nabs_phys
Nabs_x          : -- // -- for heavy-lepton neutrinos in Nabs_phys
ucon, ucov      : 4-velocity in HARM coordinates: contra-/covariant components
bcon, bcov      : 4-vector of magnetic field: contra-/covariant components
bsq             : square of the magnetic field
beta            : gas pressure / magnetic pressure
ut,uX1,uX2,uX3  : components of ucon[]
jcov            : covariant current
j2              : square of the current
ur              : radiation pressure
betar           : gas pressure / radiation pressure
Jem             : = Jrad[0] - emission?
Jabs            : = Jrad[1] - absorption?
Jsc             : = Jrad[2] + Jrad[3] - scattering?
dtau_abs        
dtau_dens
dlepton_rad
dyedt_rad
ucon_bl         : 4-velocity in Boyer-Lindquist (corotating spherical) coordinates, in which the horizon is spherical, i.e. at r = const
ucov_bl         : -- // -- covariant components
bcon_bl         : magnetic field 4-vector in Boyer-Lindquist coordinates
bcov_bl         : -- // -- covariant components
ucon_cart       : 4-velocity in Cartesian Kerr-Schild coordinates
ucov_cart       : -- // -- covariant components
bcon_cart       : magnetic field 4-vector in Cartesian Kerr-Schild coordinates
bcov_cart       : -- // -- covariant components
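A quick plotting sketch tying the two objects together, assuming the first two array axes are the radial and polar directions, the last is azimuth, and matplotlib is available:

import numpy as np
import matplotlib.pyplot as plt

# poloidal slice of log10(density) at the first azimuthal zone; the
# cell-corner arrays xf, zf provide the (N+1, M+1) vertices pcolormesh expects
plt.pcolormesh(geom['xf'][:, :, 0], geom['zf'][:, :, 0],
               np.log10(dump['RHO'][:, :, 0]))
plt.xlabel('x [code units]')
plt.ylabel('z [code units]')
plt.colorbar(label='log10(RHO) [code units]')
plt.show()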

Coordinate system

nubhlight uses a non-trivial coordinate system, called harm coordinates in the code: $X_1$, $X_2$, and $X_3$. They are defined by their transformation to spherical coordinates $r, \theta, \phi$:

$$r = e^{X_1}$$

$$\theta = G(X_2) + f\, e^{s\,(\min(X_1) - X_1)}\,\left(J(X_2) - G(X_2)\right)$$

$$\phi = X_3$$

Here $f$ is 1 if derefine_poles is enabled and 0 otherwise, $s$ is mks_smooth in the code, and

$$G(X_2) = \pi X_2 + \frac{1-h}{2} \sin(2\pi X_2)$$

$$J(X_2) = n\,\tilde{y}\left[1 + \frac{(\tilde{y}/l)^\alpha}{\alpha + 1}\right] + \frac{\pi}{2}$$

with

$$\tilde{y} = 2 X_2 - 1,$$

where $h$ is hslope and the constants $n$, $l$, and $\alpha$ are poly_norm, poly_xt, and poly_alpha in the code. $X_2$ ranges from 0 to 1, with 0 corresponding to the north pole and 1 to the south pole. The normalization $n$ is chosen so that $J(1) = \pi$, i.e. so that $\theta$ reaches $\pi$ at the south pole:

$$n = \frac{\pi}{2}\,\frac{1}{1 + \dfrac{1}{(\alpha + 1)\, l^\alpha}}$$

or expressed in code:

double poly_norm = 0.5 * M_PI * 1. / (1. + 1. / (poly_alpha + 1.) * 1. / pow(poly_xt, poly_alpha));

This transformation concentrates resolution near the equator and coarsens it near the poles.
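A minimal Python sketch of this forward transformation, following the formulas above. The parameter names (hslope, mks_smooth, poly_xt, poly_alpha, the lower bound of $X_1$, and the derefine_poles flag) mirror the quantities discussed above; the exact keys under which nubhlight stores them in the dump header may differ.

import numpy as np

def harm_to_spherical(X1, X2, X3, hslope, mks_smooth, poly_xt, poly_alpha,
                      startx1, derefine_poles=True):
    """Map code coordinates (X1, X2, X3) to spherical (r, theta, phi)."""
    r = np.exp(X1)
    G = np.pi * X2 + 0.5 * (1. - hslope) * np.sin(2. * np.pi * X2)
    y = 2. * X2 - 1.
    n = 0.5 * np.pi / (1. + 1. / ((poly_alpha + 1.) * poly_xt**poly_alpha))
    J = n * y * (1. + (y / poly_xt)**poly_alpha / (poly_alpha + 1.)) + 0.5 * np.pi
    f = 1. if derefine_poles else 0.
    th = G + f * np.exp(mks_smooth * (startx1 - X1)) * (J - G)
    return r, th, X3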

If you are converting from code coordinates to Cartesian, first convert to spherical via the above rules, then to Cartesian.

If you are converting from Cartesian coordinates to code coordinates, go the other way. Inverting $r = e^{X_1}$ and $\phi = X_3$ is trivial, but to compute $X_2$ from $\theta$ you need a numerical root find, e.g., a Newton-Raphson solve, as in the sketch below.
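A sketch of that inversion, reusing harm_to_spherical from above with a finite-difference derivative; the tolerance, step size, and initial guess are arbitrary illustrative choices.

def X2_of_theta(theta, X1, hslope, mks_smooth, poly_xt, poly_alpha, startx1,
                derefine_poles=True, tol=1e-12, maxiter=50):
    """Invert theta(X1, X2) for X2 by Newton-Raphson."""
    X2, eps = 0.5, 1e-7  # start the iteration at the equator
    for _ in range(maxiter):
        _, th0, _ = harm_to_spherical(X1, X2, 0., hslope, mks_smooth,
                                      poly_xt, poly_alpha, startx1, derefine_poles)
        _, th1, _ = harm_to_spherical(X1, X2 + eps, 0., hslope, mks_smooth,
                                      poly_xt, poly_alpha, startx1, derefine_poles)
        dX2 = (theta - th0) / ((th1 - th0) / eps)  # Newton step
        X2 += dX2
        if abs(dX2) < tol:
            break
    return X2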