
HAFS Online Mini Tutorial

Gillian Petro edited this page Dec 3, 2024 · 4 revisions

Welcome to the HAFS online mini tutorial

This tutorial serves as a guide to learning how to use the Hurricane Analysis and Forecast System (HAFS) through a series of practical tests that explore the various capabilities of the model.

Prerequisites and setting up the environment

The examples in this tutorial are tailored to be run on Mississippi State University's High Performance Computing system (MSU-HPC). The MSU-HPC consists of two components, Orion and Hercules, which share an InfiniBand interconnect and two Lustre file systems, "/work/" and "/work2/". In this tutorial, we will use Hercules.

1. MSU account to access MSU-HPC

To get an MSU-HPC account, contact your project's Account Manager to submit a request. Once the account is fully set up, you can log in to the MSU-HPC. To log in to Orion or Hercules via SSH, you will have to use your MSU account username, MSU password, and Duo two-factor authentication.

2. MSU login

Hercules/Orion login nodes are available externally via SSH:

  • Windows users can use one of the well-known clients such as PuTTY, or the built-in OpenSSH client in Windows PowerShell
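A minimal login example is shown below. The hostnames follow a common MSU-HPC pattern and are assumptions; verify the actual login addresses in the MSU-HPC documentation.

```
$ ssh your_msu_username@hercules-login.hpc.msstate.edu   # Hercules
$ ssh your_msu_username@orion-login.hpc.msstate.edu      # Orion
```

You will then be prompted for your MSU password and Duo two-factor authentication.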

3. Setting up the proper environment

After logging into Hercules/Orion, the next step is to set up the proper shell environment. To do so, you can edit and source ~/.bashrc:

$ vi ~/.bashrc

module use /apps/contrib/modulefiles
module load rocoto

$ source ~/.bashrc

Downloading, building and installing HAFS

This section provides instructions to obtain the source code from the HAFS GitHub repository, build and install HAFS, and configure the model to run it. The guidelines provided here for downloading, building, and installing HAFS are customized for this tutorial. For more detailed directions on downloading any other branch from the HAFS GitHub repository, please refer to the HAFS User's Guide.

1. Creating a directory

Before cloning the HAFS repository, create a directory in which to copy the source code and run HAFS, keeping the tutorial tests separate from other work:

$ mkdir -p /path-of-hafs-directory/
$ cd /path-of-hafs-directory/

2. Cloning the HAFS GitHub repository

Once you are in the directory where you want to place the source code, the next step is to clone it from the HAFS GitHub repository using the following command:

$ git clone -b <BRANCH> --recursive https://github.com/hafs-community/HAFS.git ./

The BRANCH for this tutorial is feature/hafs-ncas-m.
Simply set the -b <BRANCH> option to the branch name.

For example: $ git clone -b feature/hafs-ncas-m --recursive https://github.com/hafs-community/HAFS.git ./

This HAFS online mini tutorial is based on the HFIP NCAS-M HAFS Summer Colloquium. More information about the HFIP NCAS-M HAFS Summer Colloquium can be found on the HFIP webpage.

3. Building and installing HAFS

After cloning and checking out the tutorial branch, go to the sorc directory to build and install HAFS:

$ cd /path-of-hafs-directory/sorc
$ ./install_hafs.sh > install_hafs.log 2>&1

This step can take ~30 minutes or longer. If you get any errors during this step, look into the sorc/logs directory.

The install_hafs.sh script conducts the following steps:

  • Runs ./build_all.sh to build executables for all HAFS subcomponents and tools
    • Runs build_forecast.sh, build_post.sh, build_tracker.sh, build_utils.sh, build_tools.sh, build_gsi.sh, build_hycom_utils.sh, build_ww3_utils.sh
    • The above build_*.sh scripts can also be run individually to build a specific subcomponent only
  • Runs ./install_all.sh to copy and install all executables
  • Runs ./link_fix.sh to link fix directories and files
    • Copies platform specific system.conf file

4. Configuring and running HAFS

After successfully installing HAFS, check and edit HAFS system.conf, and ensure that it has the appropriate configuration:

$ cd /path-of-hafs-directory/parm
$ vi system.conf

For example:

disk_project=hwrf
tape_project=emc-hwrf
cpu_account=hurricane
archive=disk:/work2/noaa/{disk_project}/tutorial/noscrub/{ENV[USER]}/hafsarch/{SUBEXPT}/{out_prefix}.tar
CDSAVE=/work2/noaa/{disk_project}/tutorial/save/{ENV[USER]}
CDNOSCRUB=/work2/noaa/{disk_project}/tutorial/noscrub/{ENV[USER]}/hafstrak
CDSCRUB=/work2/noaa/{disk_project}/tutorial/scrub/{ENV[USER]}
syndat=/work/noaa/hwrf/noscrub/input/SYNDAT-PLUS

Note: This example uses the EMC hurricane team configuration. You may need to modify the settings to suit your project and preferences.

For reference:
disk_project: Project name for disk space
tape_project: HPSS project name
cpu_account: CPU account name for submitting jobs to the batch system (may be the same as disk_project)
archive=disk: Archive location (make sure you have write permission)
CDSAVE: HAFS parent directory
CDNOSCRUB: Track files will be copied to this location — contents will not be scrubbed (user must have write permission)
CDSCRUB: Scrubbed directory for large work files. If scrub is set to yes, this directory will be removed (user must have write permission)

To run HAFS, go to the rocoto directory and modify the workflow driver (e.g., cronjob_hafs.sh). Make sure HOMEhafs is correct.

$ cd /path-of-hafs-directory/rocoto
$ vi cronjob_hafs.sh

After modifying the script, you can either run the driver repeatedly by hand or add it as a cron task to advance the workflow.

  1. Manually:
    $ ./cronjob_hafs.sh

  2. Set up a cron task to run periodically:
    $ crontab -e

    */5 * * * * /path-of-hafs-directory/rocoto/cronjob_hafs.sh

To set up the cron task, you need to log in to the hercules-login-1 node, where cron is available. This can be done with the following command:
$ ssh hercules-login-1

5. Input Data

Users will need the datasets listed below as input to the HAFS system:

  • TCVitals file for TC initial position and intensity information
  • GFS gridded data for HAFS initialization
  • RTOFS data for ocean model initialization
  • GDAS forecast output files at the 003, 006, and 009 forecast hours in NetCDF format, if GSI is turned on
  • EnKF 80-member ensemble forecasts at 3, 6, and 9 hours, if GSI is turned on
  • Observational data in prepBUFR format, if GSI is turned on

Users need to stage the input datasets on disk to work through the tutorial. A case from Hurricane Ida (2021), initialized at 12 UTC on August 27, 2021, is used for this online tutorial, and the necessary input datasets are available on the Orion/Hercules systems. PLEASE DO NOT COPY them to your working directory. The tutorial will provide instructions on how to configure the input data location on disk.

These datasets must be stored following a specific directory structure in a disk location accessible by the HAFS scripts. This directory structure is listed below:

  • SYNDAT-PLUS/ --> Contains TCVitals
  • gfs.{YYYYMMDD}/{HH}/atmos/ --> GFS initial- and forecast-time files: NetCDF/grib2 format at 0 h, grib2 format at 6-129 h
  • rtofs.{YYYYMMDD}/ --> RTOFS initial- and forecast-time files
  • gdas.{YYYYMMDD}/{HH}/atmos/ --> 3-, 6-, and 9-h forecasts from the previous (6 h earlier) GDAS cycle
  • enkfgdas.{YYYYMMDD}/{HH}/atmos/mem{ENS}/ --> 3-, 6-, and 9-h ensemble member forecasts from the previous (6 h earlier) GDAS cycle, ENS=001, 002, ..., 080
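For the Ida case used in this tutorial (cycle 2021082712, so the previous GDAS cycle is 06 UTC), the staged input would follow this layout (illustrative):

```
SYNDAT-PLUS/
gfs.20210827/12/atmos/
rtofs.20210827/
gdas.20210827/06/atmos/
enkfgdas.20210827/06/atmos/mem001/ ... mem080/
```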

The default input locations are specified in ./parm/system.conf:

  • syndat=/work/noaa/hwrf/noscrub/input/SYNDAT-PLUS
  • COMgfs=/work/noaa/hwrf/noscrub/hafs-input/COMGFSv16
  • COMrtofs=/work/noaa/hwrf/noscrub/hafs-input/COMRTOFSv2

Users need to change these directories if they provide their own input files.
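For example, to point the workflow at your own staged input, edit these variables in ./parm/system.conf (the paths below are placeholders):

```
syndat=/path-to-your-input/SYNDAT-PLUS
COMgfs=/path-to-your-input/COMGFSv16
COMrtofs=/path-to-your-input/COMRTOFSv2
```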

Online Tutorial Test Cases

Test case (Hurricane Ida 09L 2021)

Hurricane Ida made landfall near Port Fourchon, Louisiana, on August 29, 2021, as a Category 4 hurricane with sustained winds of 150 mph. Ida is one of the costliest Atlantic hurricanes to hit the United States, having caused at least $75.25 billion (2021 USD) in damage. A total of 107 deaths were attributed to Ida, including 87 in the USA and 20 in Venezuela. In this tutorial, you will generate a 12-h forecast for the 2021082712 (12 UTC August 27) cycle of Hurricane Ida (09L, 2021), using different HAFS configurations.

Figure: (a) Best track positions for Hurricane Ida, 26 August–1 September 2021. (b) Selected wind observations and best track maximum sustained surface wind speed curve for Hurricane Ida, 26 August–1 September 2021. (c) NWS WSR-88D radar reflectivity of Ida making landfall at 1658 UTC 29 August. Courtesy of NWS WFO Lake Charles. Source: NHC report

Test 0: Regional standalone 6km resolution storm-focused configuration

This initial test uses a regional standalone, 6-km resolution, storm-focused configuration with the Extended Schmidt Gnomonic (ESG) grid, without a moving nest. The run uses GFS (Global Forecast System) 0.25-degree resolution grib2-format input data for the atmospheric initial and lateral boundary conditions and is a cold start from GFS. The test uses an atmosphere-only configuration with the GFDL MP scheme, and excludes vortex initialization, data assimilation, and ocean coupling. This test uses the default workflow, without DA/coupling, which includes the following tasks:

graph LR;
    launch-->atm_prep;
    atm_prep-->atm_ic & atm_lbc;
    atm_ic & atm_lbc-->forecast;
    forecast-->atm_post & product;
    atm_post & product-->output;
    output-->archive;
    archive-->launch;

You can find detailed information about the HAFS workflow here and the practical session here.

Checking the cronjob driver

To run HAFS with the configuration described above, we will use the cronjob driver cronjob_hafs_tutorial_test0.sh. First, change into the rocoto directory where the driver is located and check that the script has the right information, making sure that the HOMEhafs directory is correct.

$ cd /path-of-hafs-directory/rocoto
$ vi cronjob_hafs_tutorial_test0.sh

[Screenshot: contents of cronjob_hafs_tutorial_test0.sh]

As shown in the example above, the cronjob driver establishes the cycle, storm, and configuration to be run.

Reviewing the configuration

Before running the cronjob driver, review the configuration of the run to make sure it matches the intended configuration. To do so, change into the parm directory and check the configuration file defined in the cronjob driver, hafs_regional_atm.conf.

[config]
run_atm_mvnest=no
run_wave=no
run_ocean=no
ocean_model=hycom

run_atm_init=no
run_atm_init_fgat=no
run_atm_init_ens=no
run_atm_merge=no
run_atm_merge_fgat=no
run_atm_merge_ens=no
run_atm_vi=no
run_atm_vi_fgat=no
run_atm_vi_ens=no
run_gsi=no
gsi_d01=no
gsi_d02=no
run_analysis_merge=no
run_analysis_merge_ens=no
run_fgat=no
run_envar=no
run_ensda=no
run_enkf=no

ictype=gfsgrib2ab_0p25

[grid]
CASE=C512
gtype=regional
gridfixdir=/let/hafs_grid/generate/grid
stretch_fac=1.0001
target_lon={domlon}
target_lat={domlat}
nest_grids=1
parent_grid_num=1
parent_tile=6
refine_ratio=3
istart_nest=33
jstart_nest=153
iend_nest=992
jend_nest=872
regional_esg=yes
idim_nest=1440
jdim_nest=1080
delx_nest=0.03
dely_nest=0.03
halop2=5
pazi=-180

[forecast]
all_tasks=720
atm_tasks=720
ocn_tasks=60
wav_tasks=60

dt_atmos=90
k_split=2
n_split=5
layoutx=30
layouty=20
npx=1441
npy=1081

is_moving_nest=.false.

ccpp_suite_regional=FV3_HAFS_v1_gfdlmp_tedmf
ccpp_suite_glob=FV3_HAFS_v1_gfdlmp_tedmf
ccpp_suite_nest=FV3_HAFS_v1_gfdlmp_tedmf

nstf_n1=2
nstf_n2=1
nstf_n3=0
nstf_n4=0
nstf_n5=0
# GFDL MP related options
imp_physics=11
iovr=1
dt_inner=45
dnats=1
do_sat_adj=.true.
lgfdlmprad=.true.

restart_interval="240"

quilting=.true.
write_groups=2
write_tasks_per_group=60

output_grid=regional_latlon
output_grid_cen_lon={domlon}
output_grid_cen_lat={domlat}
output_grid_lon_span=109.5
output_grid_lat_span=81.6
output_grid_dlon=0.06
output_grid_dlat=0.06

Note:

CASE: FV3 resolution
LEVS: model vertical levels
gtype: grid type: uniform, stretch, nest, or regional
stretch_fac: stretching factor for the grid
target_lon: center longitude of the highest resolution tile
target_lat: center latitude of the highest resolution tile

Running the cronjob driver

After reviewing the configuration, it is time to run the driver. To do so, go to the rocoto directory and execute the driver.

  1. Manually:
    $ ./cronjob_hafs_tutorial_test0.sh

  2. Set up a cron task to run periodically:
    On Hercules, cron is only available on the hercules-login-1 node.

    $ ssh hercules-login-1
    $ crontab -e
    

    */4 * * * * /bin/sh -l -c /path-of-hafs-directory/rocoto/cronjob_hafs_tutorial_test0.sh >> /path-of-hafs-directory/rocoto/cronjob_hafs_tutorial_test0.log 2>&1

To check the Rocoto workflow-related files after running the driver, use the following command to list the files for experiment hafs_202405_tutorial_test0, storm 09L, and cycle 2021082712:

$ ls -1 hafs-hafs_202405_tutorial_test0-09L-2021082712*

The following files should be generated:

hafs-hafs_202405_tutorial_test0-09L-2021082712.db
hafs-hafs_202405_tutorial_test0-09L-2021082712.db.bak
hafs-hafs_202405_tutorial_test0-09L-2021082712.xml
hafs-hafs_202405_tutorial_test0-09L-2021082712_lock.db

The cronjob driver should also have submitted the launch or other jobs/tasks.

Checking and monitoring the Rocoto workflow

You can check the Slurm jobs with "squeue":

$ squeue -u ${USER}

You can use Rocoto commands to check the status of the Rocoto workflow and rewind tasks. For example:

$ rocotostat -w hafs-hafs_202405_tutorial_test0-09L-2021082712.xml -d hafs-hafs_202405_tutorial_test0-09L-2021082712.db


For more information about Rocoto, see the Rocoto documentation.

You can also check the status of the Rocoto workflow and rewind failed tasks with rocoto/rocoto_util.sh:

$ ./rocoto_util.sh [-a | -f | -r | -s]
  • -a: check all active (SUBMITTING|QUEUED|RUNNING|DEAD|UNKNOWN|FAILED|UNAVAILABLE) tasks
  • -f: check all failed (DEAD|UNKNOWN|FAILED|UNAVAILABLE) tasks
  • -r: check and rewind all failed (DEAD|UNKNOWN|FAILED|UNAVAILABLE) tasks
  • -s: check status for all tasks

You can use the script for a specific workflow as follows:

  • Check active tasks for a specific workflow:
    • $ ./rocoto_util.sh -a hafs-hafs_202405_tutorial_test0-09L-2021082712
  • Check task status for a specific workflow:
    • $ ./rocoto_util.sh -s hafs-hafs_202405_tutorial_test0-09L-2021082712
  • Rewind failed tasks for a specific workflow:
    • $ ./rocoto_util.sh -r hafs-hafs_202405_tutorial_test0-09L-2021082712

Alternatively, you can check active tasks, check all task statuses, and rewind all failed tasks for all workflows under the current directory as follows:

  • $ ./rocoto_util.sh -a
  • $ ./rocoto_util.sh -s
  • $ ./rocoto_util.sh -r

Checking workflow running directories and job logs

Check Job logs

When running HAFS, it might be necessary to check the job log files. Typically, the log files are located in the WORKhafs directory, which follows this naming convention:

WORKhafs=/CDSCRUB_directory/experiment_name/cycle_date_and_time/storm_id

To find the directory:

$ cat /path-of-hafs-directory/parm/system.conf

Search for CDSCRUB and list files:

$ ls /CDSCRUB_directory/experiment_name/cycle_date_and_time/storm_id

For example, on Hercules, when running experiment hafs_202405_tutorial_test0 for Ida (09L) and cycle 2021082712 in directory:

HOMEhafs=/work2/noaa/hwrf/tutorial/save/${USER}/hafs_202405

The log files are located in the directory:

WORKhafs=/work2/noaa/hwrf/tutorial/scrub/${USER}/hafs_202405_tutorial_test0/2021082712/09L

To list the log files:

$ ls /work2/noaa/hwrf/tutorial/scrub/${USER}/hafs_202405_tutorial_test0/2021082712/09L


Check workflow products under COMhafs

The workflow products are located under the COMhafs directory with the following naming convention:

COMhafs=/CDSCRUB_directory/experiment_name/com/cycle_date_and_time/storm_id

To find the directory:

$ cat /path-of-hafs-directory/parm/system.conf

Search for CDSCRUB and list the files:

$ ls /CDSCRUB_directory/experiment_name/com/cycle_date_and_time/storm_id

For example, on Hercules, when running experiment hafs_202405_tutorial_test0 for Ida (09L) and cycle 2021082712 in directory:

HOMEhafs=/work2/noaa/hwrf/tutorial/save/${USER}/hafs_202405

The workflow products are located in the directory:

COMhafs=/work2/noaa/hwrf/tutorial/scrub/${USER}/hafs_202405_tutorial_test0/com/2021082712/09L
$ ls /work2/noaa/hwrf/tutorial/scrub/${USER}/hafs_202405_tutorial_test0/com/2021082712/09L


Viewing results using NCVIEW

After successfully running HAFS, check the forecast job log and results, and use ncview to view the forecast history outputs (e.g., atmf{hhh}.nc, sfcf{hhh}.nc).

Load the necessary modules:

$ module load intel-oneapi-compilers/2024.1.0
$ module load ncview
$ module list

Change into the directory with the results:

$ cd /CDSCRUB_directory/experiment_name/cycle_date_and_time/storm_id/forecast

To check the forecast job log, you can use the following command:

$ vi forecast.log

To use ncview to view forecast output you can use the following command:

$ ncview NETCDF_FORMAT_FILE

For example:

$ ncview atmf012.nc
$ ncview sfcf012.nc

This will show the 12-h forecast results.

Test 1: Regional storm-focused moving-nesting configuration

The test described in this section is similar to Test 0 but uses a moving nest. This test uses a 12-km resolution parent domain with a 4-km resolution storm-following moving nest. For initial conditions (IC), the test uses the GFS NetCDF-format analysis, and for lateral boundary conditions (LBC), the 0.25-degree resolution grib2-format forecast. The test is a cold start from GFS; vortex initialization and data assimilation are not used. The test uses an atmosphere-only configuration, with no ocean coupling, and employs a HAFSv2A-like NATL physics suite (with Thompson microphysics). The workflow is similar to the workflow for Test 0, with the addition of the atm_prep_mvnest job.

graph LR;
    launch-->A["atm_prep(_mvnest)"];
    A["atm_prep(_mvnest)"] -->atm_ic & atm_lbc;
    atm_ic & atm_lbc-->forecast;
    forecast-->atm_post & product;
    atm_post & product-->output;
    output-->archive;
    archive-->launch;

You can find more information about HAFS dynamics and moving nests here and the practical session here.

Checking the cronjob driver

To run HAFS with the configuration described above, we will use the cronjob driver cronjob_hafs_tutorial_test1.sh. First, change into the rocoto directory where the driver is located and check that the script has the right information, making sure that the HOMEhafs directory is correct.

$ cd /path-of-hafs-directory/rocoto
$ vi cronjob_hafs_tutorial_test1.sh

Reviewing the configuration

Before running the cronjob driver, review the configuration of the run to make sure it matches the intended configuration. To do so, change into the parm directory and check the configuration file defined in the cronjob driver, hafs_tutorial_test1.conf.

[grid]
CASE=C512
LEVS=82
gtype=regional
stretch_fac=1.0001
target_lon={domlon}
target_lat={domlat}
nest_grids=2
parent_grid_num=1,2
parent_tile=6,7
refine_ratio=3,3

istart_nest=313,-999
jstart_nest=313,-999
iend_nest=712,-999
jend_nest=712,-999

regional_esg=yes
idim_nest=600,300
jdim_nest=600,300
delx_nest=0.06,0.02
dely_nest=0.06,0.02

[grid_mvnest1res]
CASE_mvnest1res=C512
LEVS_mvnest1res={grid/LEVS}
gtype_mvnest1res={grid/gtype}
stretch_fac_mvnest1res={grid/stretch_fac}
target_lon_mvnest1res={grid/target_lon}
target_lat_mvnest1res={grid/target_lat}
nest_grids_mvnest1res=1
parent_grid_num_mvnest1res=1
parent_tile_mvnest1res=6
refine_ratio_mvnest1res=9
istart_nest_mvnest1res=313
jstart_nest_mvnest1res=313
iend_nest_mvnest1res=712
jend_nest_mvnest1res=712
regional_esg_mvnest1res={grid/regional_esg}
idim_nest_mvnest1res=1800
jdim_nest_mvnest1res=1800
delx_nest_mvnest1res=0.02
dely_nest_mvnest1res=0.02

[forecast]
all_tasks=540
atm_tasks=540
ocn_tasks=60
wav_tasks=60

dt_atmos=90
npx=601,301
npy=601,301
k_split=2,4
n_split=5,6
layoutx=12,12
layouty=20,20
ntrack=0,4

ccpp_suite_regional=FV3_HAFS_v1_thompson
ccpp_suite_glob=FV3_HAFS_v1_thompson
ccpp_suite_nest=FV3_HAFS_v1_thompson
nstf_n1=2
nstf_n2=0
nstf_n3=0
nstf_n4=0
nstf_n5=0

Note:

Start/end index of the regional/nested domain on the tile's super grid:
istart_nest=CRES-(idim_nest/refine_ratio)+1
iend_nest=CRES+(idim_nest/refine_ratio)
jstart_nest=CRES-(jdim_nest/refine_ratio)+1
jend_nest=CRES+(jdim_nest/refine_ratio)

First column for regional parent, second column for nest:
i/jdim: domain dimension sizes
delx/y: super grid spacing in degrees; actual compute grid spacing = 2*delx/y

atm_tasks = parent's (layoutx * layouty) + nest's (layoutx * layouty) + (write_groups * write_tasks_per_group)
atm_tasks = 12x20 + 12x20 + 3x20 = 540
For an uncoupled (atmosphere-only) forecast, all_tasks = atm_tasks

Model physics time step: dt_atmos (same for parent and nest)
Dynamic time step: dt_atmos/k_split (different for parent and nest)
Acoustic time step: dt_atmos/k_split/n_split (different for parent and nest)
Internal tracker time interval: ntrack*dt_atmos
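The index formulas and task counts above can be checked with shell arithmetic; the numbers below are taken from this test's grid and forecast settings:

```shell
# Nest index formulas for the regional parent (CRES=512, idim_nest=600, refine_ratio=3)
CRES=512; idim_nest=600; refine_ratio=3
echo $(( CRES - idim_nest/refine_ratio + 1 ))   # istart_nest = 313
echo $(( CRES + idim_nest/refine_ratio ))       # iend_nest   = 712

# Task count: parent layout + nest layout + write component (3 groups x 20 tasks)
echo $(( 12*20 + 12*20 + 3*20 ))                # atm_tasks = 540

# Time steps for the parent (dt_atmos=90, k_split=2, n_split=5)
echo $(( 90/2 ))      # dynamic time step = 45 s
echo $(( 90/2/5 ))    # acoustic time step = 9 s
```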

Running the cronjob driver

After reviewing the configuration, it is time to run the driver. To do so, change back into the rocoto directory and execute the driver.

  1. Manually:
    $ ./cronjob_hafs_tutorial_test1.sh

  2. Set up a cron task to run periodically:
    On Hercules, cron is only available on the hercules-login-1 node.

    $ ssh hercules-login-1
    $ crontab -e
    

    */4 * * * * /bin/sh -l -c /path-of-hafs-directory/rocoto/cronjob_hafs_tutorial_test1.sh >> /path-of-hafs-directory/rocoto/cronjob_hafs_tutorial_test1.log 2>&1

The cronjob driver should also have submitted the launch or other jobs/tasks, including the atm_prep_mvnest task.

Test 2: Regional storm-focused moving-nesting configuration with HFSB-like physics

This test is similar to Test 1, using the same configuration and initial conditions, but with HFSB-like physics.

More information about HAFS physics can be found here and the practical session here.

Checking the cronjob driver

To run HAFS with the configuration described above, we will use the cronjob driver cronjob_hafs_tutorial_test2.sh. First, change into the rocoto directory where the driver is located and check that the script has the right information, making sure that the HOMEhafs directory is correct.

$ cd /path-of-hafs-directory/rocoto
$ vi cronjob_hafs_tutorial_test2.sh

Reviewing the configuration

Before running the cronjob driver, review the configuration of the run to make sure it matches the intended configuration. To do so, change into the parm directory and check the configuration file defined in the cronjob driver, hafs_tutorial_test2.conf.

To see the differences between the configurations of Test 1 and Test 2, use the following command:

$ diff hafs_tutorial_test1.conf hafs_tutorial_test2.conf

Running the cronjob driver

After reviewing the configuration, it is time to run the driver. To do so, change back into the rocoto directory and execute it. The cronjob driver will then submit the launch and other jobs/tasks.

  1. Manually:
    $ ./cronjob_hafs_tutorial_test2.sh

  2. Set up a cron task to run periodically:
    On Hercules, cron is only available on the hercules-login-1 node.

    $ ssh hercules-login-1
    $ crontab -e
    

    */4 * * * * /bin/sh -l -c /path-of-hafs-directory/rocoto/cronjob_hafs_tutorial_test2.sh >> /path-of-hafs-directory/rocoto/cronjob_hafs_tutorial_test2.log 2>&1

Modifying Physics in HAFS

To modify physics in HAFS:

  1. Create a CCPP suite with the schemes you want
  2. Compile the code with this suite
  3. Modify the namelists

The model physics schemes are located under:

$ cd /path-of-hafs-directory/sorc/hafs_forecast.fd/FV3/CCPP/physics/physics
$ ls -la

To look at, for example, the PBL scheme code:

$ cd PBL
$ cd SATMEDMF
$ ls -la

The model CCPP suites are located under:

$ cd /path-of-hafs-directory/sorc/hafs_forecast.fd/FV3/CCPP/suites
$ ls -l

To view and modify the model namelists, change into the parm directory:

$ cd /path-of-hafs-directory/parm/forecast
$ cd regional
$ ls

To view and modify, for example, the namelist template input.nml.tmp:
$ vi input.nml.tmp
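The three steps above can be sketched as the following command sequence. This is a hypothetical outline, not a tested recipe: the suite file name follows the CCPP suite_<name>.xml convention and is an assumption.

```
$ cd /path-of-hafs-directory/sorc/hafs_forecast.fd/FV3/CCPP/suites
$ cp suite_FV3_HAFS_v1_gfdlmp_tedmf.xml suite_FV3_HAFS_v1_mysuite.xml
$ vi suite_FV3_HAFS_v1_mysuite.xml   # 1. assemble the schemes you want
$ cd /path-of-hafs-directory/sorc
$ ./build_forecast.sh                # 2. recompile the forecast component
$ cd /path-of-hafs-directory/parm/forecast/regional
$ vi input.nml.tmp                   # 3. adjust the namelist template
```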

Test 3: Vortex initialization/Data assimilation (VI/DA) and MOM6 coupling configuration

This practical test explores a coupled configuration with VI and DA. Similar to Test 1, this test uses a moving nest: the parent domain has a 12-km resolution, and the moving nest has a 4-km resolution. This test uses VI and MOM6 ocean coupling, but DA is disabled and one-way wave (WW3) coupling is turned off. The workflow now includes the tasks atm_init, atm_vi, analysis_merge, ocn_prep, and ocn_post.

graph LR;
    launch-->A["atm_prep(_mvnest)"];
    A["atm_prep(_mvnest)"] -->atm_ic & atm_lbc & ocn_prep;
    atm_ic & atm_lbc-->atm_init;
    atm_init-->atm_vi;
    atm_vi-->analysis_merge;
    analysis_merge & ocn_prep-->forecast;
    forecast-->atm_post & ocn_post & product;
    atm_post & product & ocn_post-->output;
    output-->archive;
    archive-->launch;
  • atm_init: Interpolates the GFS analysis onto the model grid
  • atm_vi: Conducts VI
  • analysis_merge: Merges the VI output into the GFS background analysis to prepare the forecast input file
  • ocn_prep: Prepares RTOFS input data for MOM6
  • ocn_post: Generates ocean products

More information on VI can be found here and the practical session here.

Checking the cronjob driver

To run HAFS with the configuration described above, we will use the cronjob driver cronjob_hafs_tutorial_test3.sh. First, change into the rocoto directory where the driver is located and check that the script has the right information, making sure that the HOMEhafs directory is correct.

$ cd /path-of-hafs-directory/rocoto
$ vi cronjob_hafs_tutorial_test3.sh

Reviewing the configuration

Before running the cronjob driver, review the configuration of the run to make sure it matches the intended configuration. To do so, change into the parm directory and check the configuration file defined in the cronjob driver, ../parm/tutorial/hafs_tutorial_test3.conf.

Running the cronjob driver

After reviewing the configuration, it is time to run the driver. To do so, change back into the rocoto directory and execute the driver.

  1. Manually:
    $ ./cronjob_hafs_tutorial_test3.sh

  2. Set up a cron task to run periodically:
    On Hercules, cron is only available on the hercules-login-1 node.

    $ ssh hercules-login-1
    $ crontab -e
    

    */4 * * * * /bin/sh -l -c /path-of-hafs-directory/rocoto/cronjob_hafs_tutorial_test3.sh >> /path-of-hafs-directory/rocoto/cronjob_hafs_tutorial_test3.log 2>&1

The cronjob driver should also have submitted the launch or other jobs/tasks, including the atm_init, atm_vi, ocn_prep, analysis_merge, and ocn_post tasks.


Plotting VI binary output files

To plot the final VI fields and the GFS analysis fields before VI, follow these steps:

  1. Copy Copyexec.sh to your atm_vi directory (e.g., /CDSCRUB_directory/experiment_name/cycle_date_and_time/storm_id/atm_vi) from /work2/noaa/hwrf/noscrub/jhshin/VI_Hercules/Copyexec.sh on Hercules.
  2. Run Copyexec.sh to copy the executables needed to plot the VI binary output files.
  3. Run the script Runexec.sh.
  4. Use GrADS with the generated ctl files to plot the VI output.
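The steps above can be sketched as the following command sequence (the atm_vi path components are placeholders from the WORKhafs naming convention earlier on this page):

```
$ cd /CDSCRUB_directory/experiment_name/cycle_date_and_time/storm_id/atm_vi
$ cp /work2/noaa/hwrf/noscrub/jhshin/VI_Hercules/Copyexec.sh .
$ ./Copyexec.sh     # stage the plotting executables
$ ./Runexec.sh      # convert the VI binary output
$ grads             # then open the generated ctl files to plot
```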

Turning on DA

The cronjob driver of Test 3 contains a section that allows you to run the test using data assimilation. The workflow will look as follows:

graph LR;
    launch-->A["atm_prep(_mvnest)"];
    A["atm_prep(_mvnest)"] -->B["atm_ic(_fgathh)"] & atm_lbc & ocn_prep & obs_prep;
    B["atm_ic(_fgathh)"] & atm_lbc-->C["atm_init(_fgathh)"];
    C["atm_init(_fgathh)"]-->D["atm_vi(_fgathh)"];
    D["atm_vi(_fgathh)"] & obs_prep-->analysis;
    analysis-->analysis_merge;
    analysis_merge & ocn_prep-->forecast;
    forecast-->atm_post & ocn_post & product;
    atm_post & product & ocn_post-->output;
    output-->archive;
    archive-->cleanup;
    cleanup--->|next forecast cycle|launch;

To enable the data assimilation configuration, follow these steps:

  1. Check and edit cronjob_hafs_tutorial_test3.sh to use ../parm/tutorial/hafs_tutorial_test3a.conf
  2. Review configuration file ../parm/tutorial/hafs_tutorial_test3a.conf
  3. Run the cronjob driver

Now, on top of Test 3, data assimilation through GSI is further turned on. More information on DA with application to HAFS can be found here and the practical session here.


HAFS Graphics

HAFS graphics are run for every storm and forecast cycle. For active storms, graphics are available for both operational HFSA and HFSB. You can find more detailed information about HAFS graphics here.

The Python scripts used to generate the graphics are located in the following GitHub repository:
https://github.com/hafs-community/hafs_graphics

To clone the repository in your local directory:

$ mkdir hafs_graphics
$ cd hafs_graphics
$ git clone -b feature/hafs.v2.0.1 https://github.com/hafs-community/hafs_graphics.git .

The graphics that can be generated include:

  • ATCF - To plot track and intensity
  • Atmos - To plot variables like mean sea level pressure, surface temperature, heat fluxes, vertical wind shear
  • Ocean - To plot sea surface temperature, sea surface salinity, ocean heat content

HAFS graphics can be run as part of the HAFS workflow. To do so, turn on emcgraphics by adding config.run_emcgraphics=yes to the cronjob driver, for example:

./run_hafs.py ${opts} 2021082712 09L HISTORY ${confopts} config.run_emcgraphics=yes

They can also be run from the HAFS graphics package, using the following drivers in /path-of-hafs-directory/hafs_graphics/run:

  • jobhafsATCF.sh
  • jobhafsatmos_orion.sh
  • jobhafsocean_orion.sh

Note: Before running the scripts, it is necessary to make some modifications and to load the graphics.run.hercules module.

These drivers are platform dependent.

Lastly, you can run the Python scripts manually.

  1. Load the Python environment:

    $ module purge
    $ module use /path-of-hafs-graphics/modulefiles
    $ module load graphics.run.hercules
    $ module list
    
  2. Locate the Python code: $ cd /path-of-hafs-graphics/ush/python/

    There are three directories: ATCF, atmos, ocean. Using the ocean directory as an example, it contains:

    • plot_* files: scripts that produce the figures
    • plot_ocean.yml: file that needs to be edited with the location of the HAFS output and storm information
  3. Edit the plot_ocean.yml:

    $ vi plot_ocean.yml
    

    stormModel: HFSA
    stormName: IDA
    stormID: 09L
    stormBasin: AL
    ymdh: '2021082712'
    fhhh: f024
    trackon: 'yes'
    COMhafs: /work2/noaa/hwrf/tutorial/scrub/bliu/hafs_202405_tutorial_test4/com/2021082712/09L
    cartopyDataDir: /work/noaa/hwrf/noscrub/local/share/cartopy

  4. Run a Python script:

    $ ipython
    In [1]: %run plot_sst.py

    Then view the resulting figure, for example:

    $ display IDALIA10L.2023082718.HFSA.ocean.sst.f024.png
    