From 35641eed05ab7e5c97b04baccb8f7b342bbfd0b4 Mon Sep 17 00:00:00 2001 From: Julie Schramm Date: Wed, 12 Jun 2019 15:15:16 -0600 Subject: [PATCH 1/2] Add Chapter 9 and make formatting consistent Add new version of Listing 6.3 and make formatting consistent in Chapter 6 Add reference to Chapter 9 from Chapter 3 Clean up README file Build of html and pdf files successful. --- doc/CCPPtechnical/README | 18 +- doc/CCPPtechnical/source/AddingNewSchemes.rst | 15 +- doc/CCPPtechnical/source/AutoGenPhysCaps.rst | 2 +- .../source/BuildingRunningHostModels.rst | 463 ++++++++++++++++++ .../source/ConfigBuildOptions.rst | 4 +- doc/CCPPtechnical/source/HostSideCoding.rst | 201 ++++---- doc/CCPPtechnical/source/index.rst | 2 +- 7 files changed, 592 insertions(+), 113 deletions(-) create mode 100644 doc/CCPPtechnical/source/BuildingRunningHostModels.rst diff --git a/doc/CCPPtechnical/README b/doc/CCPPtechnical/README index 30617076..9b22993c 100644 --- a/doc/CCPPtechnical/README +++ b/doc/CCPPtechnical/README @@ -1,20 +1,4 @@ -Steps to build and use the Sphinx documentation tool: - -1) Get Sphinx and sphinxcontrib-bibtex installed on your desktop from - http://www.sphinx-doc.org/en/master/usage/installation.html - https://sphinxcontrib-bibtex.readthedocs.io/en/latest/quickstart.html#installation - -2) Create a Sphinx documentation root directory: - % mkdir docs - % cd docs - -3) Initialize your Sphinx project (set up an initial directory structure) using - % sphinx-quickstart - - See http://www.sphinx-doc.org/en/master/usage/quickstart.html or - https://sphinx-rtd-tutorial.readthedocs.io/en/latest/sphinx-quickstart.html - - for help. You can answer (ENTER) to most of the questions. +Steps to build the CCPP Technical Documentation: To build html: diff --git a/doc/CCPPtechnical/source/AddingNewSchemes.rst b/doc/CCPPtechnical/source/AddingNewSchemes.rst index 4168f713..b8a247ab 100644 --- a/doc/CCPPtechnical/source/AddingNewSchemes.rst +++ b/doc/CCPPtechnical/source/AddingNewSchemes.rst @@ -6,11 +6,13 @@ Tips for Adding a New Scheme This chapter contains a brief description on how to add a new scheme to the *CCPP-Physics* pool. -* Identify the variables required for the new scheme and check if they are already available for use in the CCPP by checking the metadata tables in ``GFS_typedefs.F90`` or by perusing file ``ccpp-framework/doc/DevelopersGuide/CCPP_VARIABLES_XYZ.pdf`` generated by ``ccpp_prebuild.py``. +* Identify the variables required for the new scheme and check if they are already available for use in the CCPP by checking the metadata tables in ``GFS_typedefs.F90`` or by perusing file ``ccpp-framework/doc/DevelopersGuide/CCPP_VARIABLES_{FV3,SCM}.pdf`` generated by ``ccpp_prebuild.py``. * If the variables are already available, they can be invoked in the scheme’s metadata table and one can skip the rest of this subsection. If the variable required is not available, consider if it can be calculated from the existing variables in the CCPP. If so, an interstitial scheme (such as ``scheme_pre``; see more in :numref:`Chapter %s `) can be created to calculate the variable. However, the variable must be defined but not initialized in the host model as the memory for this variable must be allocated on the host model side. Instructions for how to add variables to the host model side is described in :numref:`Chapter %s `. - * It is important to note that not all data types are persistent in memory. 
The interstitial data type is erased every time step and does not persist from one set to another or from one group to another. The diagnostic data type is periodically erased because it is used to accumulate variables for given time intervals. + * It is important to note that not all data types are persistent in memory. Most variables in the interstitial data type are reset (to zero or other initial values) at the beginning of a physics group and do not persist from one set to another or from one group to another. The diagnostic data type is periodically reset because it is used to accumulate variables for given time intervals. However, there is a small subset of interstitial variables that are set at creation time and are not reset; these are typically dimensions used in other interstitial variables. + + .. note:: If the value of a variable must be remembered from one call to the next, it should not be in the interstitial or diagnostic data types. * If information from the previous timestep is needed, it is important to identify if the host model readily provides this information. For example, in the Model for Prediction Across Scales (MPAS), variables containing the values of several quantities in the preceding timesteps are available. When that is not the case, as in the UFS Atmosphere, interstitial schemes are needed to compute these variables. As an example, the reader is referred to the GF convective scheme, which makes use of interstitials to obtain the previous timestep information. @@ -23,7 +25,8 @@ This chapter contains a brief description on how to add a new scheme to the *CCP * ``NEMSfv3gfs/ccpp/config/ccpp_prebuild_config.py`` for the UFS Atmosphere * ``gmtb-scm/ccpp/config/ccpp_prebuild_config.py`` for the SCM -* Add the new scheme to the list of schemes in ``ccpp_prebuild_config.py`` using the same path as the existing schemes: +* Add the new scheme to the Python dictionary in ``ccpp_prebuild_config.py`` using the same path + as the existing schemes: .. code-block:: console @@ -52,15 +55,15 @@ This chapter contains a brief description on how to add a new scheme to the *CCP * ``NEMSfv3gfs/ccpp/suites`` for the UFS Atmosphere * ``gmtb-scm/ccpp/suites`` for the SCM -* Before running, check for consistency between the namelist and the SDF. There is no default consistency check between the SDF and the namelist unless the developer adds one. Errors may result in segment faults in running something you did not intend to run if the arrays are not allocated. +* Before running, check for consistency between the namelist and the SDF. There is no default consistency check between the SDF and the namelist unless the developer adds one. Errors may result in segmentation faults in running something you did not intend to run if the arrays are not allocated. * Test and debug the new scheme: * Typical problems include segment faults related to variables and array allocation. * Make sure SDF and namelist are compatible. Inconsistencies may result in segmentation faults because arrays are not allocated or in unintended scheme(s) being executed. * A scheme called GFS_debug (``GFS_debug.F90``) may be added to the SDF where needed to print state variables and interstitial variables. If needed, edit the scheme beforehand to add new variables that need to be printed. - * Check *prebuild* script. - * Compile code in DEBUG mode, run through debugger if necessary (gdb, Allinea DDT, totalview, ...). + * Check *prebuild* script for success/failure and associated messages. 
+ * Compile code in DEBUG mode, run through debugger if necessary (gdb, Allinea DDT, totalview, ...). See :numref:`Chapter %s ` for information on debugging. * Use memory check utilities such as valgrind. * Double-check the metadata table in your scheme to make sure that the standard names correspond to the correct local variables. diff --git a/doc/CCPPtechnical/source/AutoGenPhysCaps.rst b/doc/CCPPtechnical/source/AutoGenPhysCaps.rst index 3172f28e..d86d58f1 100644 --- a/doc/CCPPtechnical/source/AutoGenPhysCaps.rst +++ b/doc/CCPPtechnical/source/AutoGenPhysCaps.rst @@ -15,7 +15,7 @@ while the host model *caps* are described in :numref:`Chapter %s `). .. _DynamicBuildCaps: diff --git a/doc/CCPPtechnical/source/BuildingRunningHostModels.rst b/doc/CCPPtechnical/source/BuildingRunningHostModels.rst new file mode 100644 index 00000000..5bea7dfd --- /dev/null +++ b/doc/CCPPtechnical/source/BuildingRunningHostModels.rst @@ -0,0 +1,463 @@ +.. _BuildingRunningHostModels: + +**************************************** +Building and Running Host Models +**************************************** + +The following instructions describe how to compile and run the CCPP code with the SCM (:numref:`Section %s `) and with the UFS Atmosphere (:numref:`Section %s `). Instructions are for the *Theia, Jet* and *Cheyenne* computational platforms, with examples on how to run the code on *Theia*. + +.. _SCM: + +SCM +==================== + +One option for a CCPP host model is the SCM. This can be a valuable tool for diagnosing the performance of a physics suite, from validating that schemes have been integrated into a suite correctly to deep dives into how physical processes are being represented by the approximating code. In fact, this SCM likely serves as the simplest example for using the CCPP and its framework in an atmospheric model. + +System Requirements, Libraries, and Tools +-------------------------------------------- + +The source code for the SCM and CCPP component is in the form of programs written in FORTRAN, FORTRAN 90, and C. In addition, the I/O relies on the netCDF libraries. Beyond the standard scripts, the build system relies on the use of the Python scripting language, along with cmake, GNU make and date. + +The basic requirements for building and running the CCPP and SCM bundle are listed below. The versions listed reflect successful tests and there is no guarantee that the code will work with different versions. + + * FORTRAN 90+ compiler versions + * ifort 18.0.1.163 and 19.0.2 + * gfortran 6.2, 8.1, and 9.1 + * pgf90 17.7 and 17.9 + * C compiler versions + * icc v18.0.1.163 and 19.0.2 + * gcc 6.2 and 8.1 + * AppleClang 10.0.0.10001145 + * pgcc 17.7 and 17.9 + * cmake versions 2.8.12.1, 2.8.12.2, and 3.6.2 + * netCDF with HDF5, ZLIB and SZIP versions 4.3.0, 4.4.0, 4.4.1.1, 4.5.0, 4.6.1, and 4.6.3 (not 3.x) + * Python versions 2.7.5, 2.7.9, and 2.7.13 (not 3.x) + * Libxml2 versions 2.2 and 2.9.7 (not 2.9.9) + +Because these tools and libraries are typically the purview of system administrators to install and maintain, they are considered part of the basic system requirements. +Further, there are several utility libraries as part of the NCEPlibs package that must be installed prior to building the SCM. + + * bacio v2.0.1 - Binary I/O library + * sp v2.0.2 - Spectral Transformation Library + * w3nco v2.0.6 - GRIB decoder and encoder library + +These libraries are prebuilt on most NOAA machines using the Intel compiler. 
For those needing to build the libraries themselves, GMTB recommends using the source code from GitHub at https://github.com/NCAR/NCEPlibs.git, which includes build files for various compilers and machines using OpenMP flags and which are thread-safe. Instructions for installing NCEPlibs are included on the GitHub repository webpage, but for the sake of example, execute the following for obtaining and building from source in ``/usr/local/NCEPlibs`` on a Mac: + +.. code-block:: console + + mkdir /usr/local/NCEPlibs + cd /usr/local/src + git clone https://github.com/NCAR/NCEPlibs.git + cd NCEPlibs + ./make_ncep_libs.sh -s macosx -c gnu -d /usr/local/NCEPlibs -o 1 + +Once NCEPlibs is built, the ``NCEPLIBS_DIR`` environment variable must be set to the location of the installation. For example, if NCEPlibs was installed in ``/usr/local/NCEPlibs``, one would execute + +.. code-block:: console + + export NCEPLIB_DIR=/usr/local/NCEPlibs + +If using *Theia* or *Cheyenne* HPC systems, this environment variable is automatically set to an appropriate installation of NCEPlibs on those machines through use of one of the setup scripts described below. + +Building and Running the SCM +-------------------------------------------- + +Instructions for downloading the code are provided in :numref:`Chapter %s `. Here are the steps to compile and run SCM: + +* Run the CCPP *prebuild* script to match required physics variables with those available from the dycore (SCM) and to generate physics *caps* and ``makefile`` segments. + + .. code-block:: console + + ./ccpp/framework/scripts/ccpp_prebuild.py --config=./ccpp/config/ccpp_prebuild_config.py [ -- debug ] + +* Change directory to the top-level SCM directory. + + .. code-block:: console + + cd scm + +* (Optional) Run the machine setup script if necessary. This script loads compiler modules (Fortran 2003-compliant), netCDF module, etc. and sets compiler environment variables. + + * ``source etc/Theia_setup_intel.csh`` (for csh) or ``. etc/Theia_setup_intel.sh`` (for bash) + * ``source etc/Theia_setup_gnu.csh`` (for csh) or ``. etc/Theia_setup_gnu.sh`` (for bash) + * ``source etc/Theia_setup_pgi.csh`` (for csh) or ``. etc/Theia_setup_pgi.sh`` (for bash) + * ``source etc/Cheyenne_setup_intel.csh`` (for csh) or ``. etc/Cheyenne_setup_intel.sh`` (for bash) + * ``source etc/Cheyenne_setup_gnu.csh`` (for csh) or ``. etc/Cheyenne_setup_gnu.sh`` (for bash) + * ``source etc/Cheyenne_setup_pgi.csh`` (for csh) or ``. etc/Cheyenne_setup_pgi.sh`` (for bash) + * ``source etc/UBUNTU_setup.csh`` (for csh) or ``. etc/UBUNTU_setup.sh`` (for bash) if following the instructions in ``doc/README_UBUNTU.txt`` + * ``source etc/CENTOS_setup.csh`` (for csh) or ``. etc/CENTOS_setup.sh`` (for bash) if following the instructions in ``doc/README_CENTOS.txt`` + * ``source etc/MACOSX_setup.csh`` (for csh) or ``. etc/MACOSX_setup.sh`` (for bash) if following the instructions in ``doc/README_MACOSX.txt`` + +.. note:: If using a local Linux or Mac system, we provide instructions for how to set up your development system (compilers and libraries) in ``doc/README_{MACOSX,UBUNTU,CENTOS}.txt``. If following these, you will need to run the respective setup script listed above. If your computing environment was previously set up to use modern compilers with an associated netCDF installation, it may not be necessary, although we recommend setting environment variables such as ``CC`` and ``FC``. 
For version 3.0 and above, it is required to have the ``NETCDF`` environment variable set to the path of the netCDF installation that was compiled with the same compiler used in the following steps. Otherwise, the ``cmake`` step will not complete successfully.
+
+* Make a build directory and change into it.
+
+  .. code-block:: console
+
+     mkdir bin && cd bin
+
+* Invoke cmake on the source code to build using one of the commands below.
+
+  * Without threading / OpenMP
+
+    .. code-block:: console
+
+       cmake ../src
+
+  * With threading / OpenMP
+
+    .. code-block:: console
+
+       cmake -DOPENMP=ON ../src
+
+  * Debug mode
+
+    .. code-block:: console
+
+       cmake -DCMAKE_BUILD_TYPE=Debug ../src
+
+* If ``cmake`` cannot find ``libxml2`` because it is installed in a non-standard location, add the following to the ``cmake`` command.
+
+  .. code-block:: console
+
+     -DPC_LIBXML_INCLUDEDIR=...
+     -DPC_LIBXML_LIBDIR=...
+
+* Compile with the ``make`` command. Add ``VERBOSE=1`` to obtain more information on the build process.
+
+  .. code-block:: console
+
+     make
+
+Note that this will produce the executable ``gmtb_scm`` and the library ``libccppphys.so.X.Y.Z`` (where X is the major version number, Y is the minor version number, and Z is the patch level), as well as ``libccppphys.so``, which is a link to ``libccppphys.so.X.Y.Z``. The library, which is located in ``ccpp/lib``, will be dynamically linked to the executable at runtime.
+
+If compilation completes successfully, a working executable named ``gmtb_scm`` will have been created in the ``bin`` directory.
+
+Although ``make clean`` is not currently implemented, an out-of-source build is used, so all that is required to clean the ``build/run`` directory is (from the ``bin`` directory)
+
+.. code-block:: console
+
+   pwd   # confirm that you are in the build/run directory before deleting files
+   rm -rfd *
+
+.. warning:: This command can be dangerous (it deletes files without confirming), so make sure that you are in the right directory before executing!
+
+There are several test cases provided with this version of the SCM. For all cases, the SCM will step through the time steps, applying forcing and calling the physics defined in the chosen SDF using physics configuration options from an associated namelist. The model is executed through one of two Python run scripts that are pre-staged into the ``bin`` directory: ``run_gmtb_scm.py`` or ``multi_run_gmtb_scm.py``. The former sets up and runs one integration, while the latter sets up and runs several integrations serially.
+
+**Single Run Script Usage**
+
+Running a case requires three pieces of information: the case to run (consisting of initial conditions, geolocation, forcing data, etc.), the physics suite to use (through a CCPP SDF), and a physics namelist (that specifies configurable physics options to use). Cases are set up via their own namelists in ``../etc/case_config``. A default physics suite is provided as a user-editable variable in the script, and default namelists are associated with each physics suite (through ``../src/default_namelists.py``), so, technically, one need only specify a case to run with the SCM. The single run script’s interface is described below.
+
+.. code-block:: console
+
+   ./run_gmtb_scm.py -c CASE_NAME [-s SUITE_NAME] [-n PHYSICS_NAMELIST_PATH] [-g]
+
+
+When invoking the run script, the only required argument is the name of the case to run. The case name used must match one of the case configuration files located in ``../etc/case_config`` (*without the .nml extension!*).
If specifying a suite other than the default, the suite name used must match the value of the suite name in one of the SDFs located in ``../../ccpp/suites`` (Note: not the filename of the SDF). As part of the third CCPP release, the following suite names are valid: + + * SCM_GFS_v15 + * SCM_GFS_v15plus + * SCM_csawmg + * SCM_GSD_v0 + +Note that using the Thompson microphysics scheme (as in ``SCM_GSD_v0``) requires the existence of lookup tables during its initialization phase. As of the release, computation of the lookup tables has been prohibitively slow with this model, so it is highly suggested that they be downloaded and staged to use this scheme (and the ``SCM_GSD_v0`` suite). Pre-computed tables have been created and are available for download at the following URLs: + * https://dtcenter.org/GMTB/freezeH2O.dat (243 M) + * https://dtcenter.org/GMTB/qr_acr_qg.dat (49 M) + * https://dtcenter.org/GMTB/qr_acr_qs.dat (32 M) + +These files should be staged in ``gmtb-scm/scm/data/physics_input_data`` prior to executing the run script. Since binary files can be system-dependent (due to endianness), it is possible that these files will not be read correctly on your system. For reference, the linked files were generated on *Theia* using the Intel v18 compiler. + +Also note that some cases require specified surface fluxes. Special SDFs that correspond to the suites listed above have been created and use the ``*_prescribed_surface`` decoration. It is not necessary to specify this filename decoration when specifying the suite name. If the ``spec_sfc_flux`` variable in the configuration file of the case being run is set to ``.true.``, the run script will automatically use the special SDF that corresponds to the chosen suite from the list above. + +If specifying a namelist other than the default, the value must be an entire filename that exists in ``../../ccpp/physics_namelists``. Caution should be exercised when modifying physics namelists since some redundancy between flags to control some physics parameterizations and scheme entries in the SDFs currently exists. Values of numerical parameters are typically OK to change without fear of inconsistencies. Lastly, the ``-g`` flag can be used to run the executable through the ``gdb`` debugger (assuming it is installed on the system). + +If the run aborts with the error message + +.. code-block:: console + :emphasize-lines: 1,1 + + gmtb_scm: libccppphys.so.X.X.X: cannot open shared object file: No such file or directory + +the environment variable ``LD_LIBRARY_PATH`` must be set to + +.. code-block:: console + + export LD_LIBRARY_PATH=$PWD/ccpp/physics:$LD_LIBRARY_PATH + +before running the model. + +A netCDF output file is generated in the location specified in the case configuration file, if the ``output_dir`` variable exists in that file. Otherwise an output directory is constructed from the case, suite, and namelist used (if different from the default). All output directories are placed in the ``bin`` directory. Any standard netCDF file viewing or analysis tools may be used to examine the output file (ncdump, ncview, NCL, etc). + +**Multiple Run Script Usage** + +A second Python script is provided for automating the execution of multiple integrations through repeated calling of the single run script. From the run directory, one may use this script through the following interface. + +.. code-block:: console + + ./multi_run_gmtb_scm.py {[-c CASE_NAME] [-s SUITE_NAME] [-f PATH_TO_FILE]} [-v{v}] [-t] + +No arguments are required for this script. 
The ``-c`` or ``--case``, ``-s`` or ``--suite``, and ``-f`` or ``--file`` options form a mutually exclusive group, so at most one of them can be used at a time. If ``-c`` is specified with a case name, the script will run a set of integrations for all supported suites (defined in ``../src/supported_suites.py``) for that case. If ``-s`` is specified with a suite name, the script will run a set of integrations for all supported cases (defined in ``../src/supported_cases.py``) for that suite. If ``-f`` is specified with the path to a file, the script will read in the lists of cases, suites, and namelists to use from that file. If multiple namelists are specified in the file, either exactly one suite must be specified or the number of suites must match the number of namelists. If none of the ``-c``/``--case``, ``-s``/``--suite``, or ``-f``/``--file`` options is specified, the script will run through all permutations of supported cases and suites (as defined in the files previously mentioned).
+
+In addition to the main options, some helper options can be used with any of those above. The ``-v`` or ``--verbose`` option can be used to output more information from the script to the console and to a log file. If this option is not used, only completion progress messages are written out. If ``-v`` is used, the script will write out completion progress messages and all messages and output from the single run script. If ``-vv`` is used, the script will also write out all messages and single run script output to a log file (``multi_run_gmtb_scm.log``) in the ``bin`` directory. The final option, ``-t`` or ``--timer``, can be used to output the elapsed time for each integration executed by the script. Note that the execution time includes file operations performed by the single run script in addition to the execution of the underlying (Fortran) SCM executable. By default, this option will execute one integration of each subprocess. Since some variability is expected for each model run, if greater precision is required, the number of integrations used for timing averaging can be set through an internal script variable. This option can be useful, for example, for getting a rough idea of the relative computational expense of different physics suites.
+
+**Batch Run Script**
+
+If using the model on HPC resources where significant amounts of processor time are anticipated for the experiments, it will likely be necessary to submit a job through the HPC’s batch system. An example script for running the model on *Theia*’s batch system (SLURM) is included in the repository at ``gmtb-scm/scm/etc/gmtb_scm_slurm_example.py``. Edit the ``job_name``, ``account``, etc. to suit your needs and copy the script to the ``bin`` directory. The case name to be run is included in the ``command`` variable. To use it, invoke
+
+.. code-block:: console
+
+   ./gmtb_scm_slurm_example.py
+
+from the ``bin`` directory.
+
+Additional information on the SCM can be found at https://dtcenter.org/gmtb/users/ccpp/docs/SCM-CCPP-Guide_v3.0.pdf
+
+.. _UFSAtmo:
+
+UFS Atmosphere
+====================
+
+Another option for a CCPP host model is the UFS Atmosphere, located in the umbrella repository NEMSfv3gfs.
+
+System Requirements, Libraries, and Compilers
+---------------------------------------------
+
+The build system for the UFS with CCPP relies on the use of the Python scripting language, along with ``cmake``.
+
+The basic requirements for building and running the UFS with CCPP are listed below.
The versions listed reflect successful tests and there is no guarantee that the code will work with different versions. + + * FORTRAN 90+ compiler versions + * ifort 15.1.133, 18.0.1.163 and 19.0.2 + * gfortran 6.2, 8.1, and 9.1 + * C compiler versions + * icc v18.0.1.163 and 19.0.2 + * gcc 6.2.0 and 8.1 + * AppleClang 10.0 + * MPI job scheduler versions + * mpt 2.19 + * impi 5.1.1.109 and 5.1.2.150 + * mpich 3.2.1 + * cmake versions 2.8.12.1, 2.8.12.2, and 3.6.2 + * netCDF with HDF5, ZLIB and SZIP versions 4.3.0, 4.4.0, 4.4.1.1, 4.5.0, 4.6.1, and 4.6.3 (not 3.x) + * Python versions 2.7.5, 2.7.9, and 2.7.13 (not 3.x) + +A number of NCEP libraries are required to build and run FV3 and are listed in :numref:`Table %s `. + +.. _NCEP_lib_FV3: + +.. table:: *NCEP libraries required to build the UFS Atmosphere* + + +---------------------------+-------------+----------------------------------------------------+ + | Library | Version | Description | + +===========================+=============+====================================================+ + | bacio | 2.0.1 | NCEP binary I/O library | + +---------------------------+-------------+----------------------------------------------------+ + | ip | 2.0.0/3.0.0 | NCEP general interpolation library | + +---------------------------+-------------+----------------------------------------------------+ + | nemsio | 2.2.3 | NEMS I/O routines | + +---------------------------+-------------+----------------------------------------------------+ + | sp | 2.0.2 | NCEP spectral grid transforms | + +---------------------------+-------------+----------------------------------------------------+ + | w3emc | 2.2.0 | NCEP/EMC library for decoding data in GRIB1 format | + +---------------------------+-------------+----------------------------------------------------+ + | w3nco/v2.0.6 | 2.0.6 | NCEP/NCO library for decoding data in GRIB1 format | + +---------------------------+-------------+----------------------------------------------------+ + +These libraries are prebuilt on most NOAA machines using the Intel compiler. For those needing to build the libraries themselves, GMTB recommends using the source code from GitHub at https://github.com/NCAR/NCEPlibs.git, which includes build files for various compilers and machines using OpenMP flags and which are thread-safe. + +In addition to the NCEP libraries, some additional external libraries are needed (:numref:`Table %s `). + +.. _ext_lib_FV3: + +.. 
table:: *External libraries necessary to build the UFS Atmosphere* + + +--------------------+-------------------------+---------------------------------------------------------------------------------------------+ + | Library | Version | Description | + +====================+=========================+=============================================================================================+ + | ESMF | V7.1.0r and v8.0.0_bs21 | Earth System Modeling Framework for coupling applications | + +--------------------+-------------------------+---------------------------------------------------------------------------------------------+ + | netCDF | 4.3.0 and 4.6.1 | Interface to data access functions for storing and retrieving data arrays | + +--------------------+-------------------------+---------------------------------------------------------------------------------------------+ + | SIONlib (optional) | v1.7.2 | Parallel I/O library (link) that can be used to read precomputed lookup tables instead of \ | + | | | computing them on the fly (or using traditional Fortran binary data files) | + +--------------------+-------------------------+---------------------------------------------------------------------------------------------+ + +The Earth System Modeling Framework (ESMF), the SIONlib, the NCEPlibs, and the netCDF libraries must be built with the same compiler as the other components of the UFS Atmosphere. + +Building the UFS Atmosphere +--------------------------- + +A complete listing and description of the FV3 build options were discussed in :numref:`Chapter %s ` and are shown in :numref:`Figure %s `. This section will describe the commands needed to build the different options using the script ``compile.sh`` provided in the NEMSfv3gfs distribution. This script calls ``ccpp_prebuild.py``, so users do not need to run the *prebuild* step manually. All builds using ``compile.sh`` are made from the ``./tests`` directory of NEMSfv3gfs and follow the basic command: + +.. code-block:: console + + ./compile.sh $PWD/../FV3 system.compiler 'MAKEOPTS' + +Here, ``system`` stands for the machine on which the code is compiled and can be any of the following machines and compilers: *theia, jet, cheyenne, gaea, stampede, wcoss_cray, wcoss_dell_p3, supermuc_phase2, macosx*, or *linux*. + +``compiler`` stands for the compiler to use and depends on the system. For *theia* and *cheyenne*, the available options are ``intel`` and ``gnu``. For *macosx* and *linux*, the only tested compiler is ``gnu``. For all other platforms, ``intel`` is the only option at this time. + +The ``MAKEOPTS`` string, enclosed in single or double quotes, allows to specify options for compiling the code. 
The following options are of interest for building the CCPP version of NEMSfv3gfs:
+
+* **CCPP=Y** - enables :term:`CCPP` (default is ``N``)
+* **STATIC=Y** - enables the CCPP static mode; requires ``CCPP=Y`` (default is ``N``) and ``SUITES=...`` (see below)
+* **SUITES=XYZ,ABC,DEF,...** - specifies the SDF(s) to use when compiling the code in CCPP static mode; SDFs are located in ``ccpp/suites/``, but the path is omitted in the argument; requires ``CCPP=Y STATIC=Y`` (default is ``''``)
+* **SION=Y** - enables support for the SIONlib I/O library (used by CCPP to read precomputed lookup tables instead of computing them on the fly); available on *Theia, Cheyenne, Jet*; also available on *Mac OS X* and *Linux* if the instructions in ``doc/README_{macosx,linux}.txt`` are followed (default is ``N``)
+* **32BIT=Y** - compiles the FV3 dynamical core in single precision; note that the physics are always compiled in double precision; this option is only available on *Theia, Cheyenne*, and *Jet* (default is ``N``)
+* **REPRO=Y** - compiles the code in :term:`REPRO` mode, i.e. removes certain compiler optimization flags used in the default :term:`PROD` mode to obtain bit-for-bit (b4b) identical results between CCPP and non-CCPP code (default is ``N``)
+* **DEBUG=Y** - compiles the code in DEBUG mode, i.e. removes all optimization of :term:`PROD` mode and adds bounds checks; mutually exclusive with ``REPRO=Y`` (default is ``N``)
+* **INTEL18=Y** - available on *Theia* and *Jet* only; compiles the code with the Intel 18 compiler instead of the default Intel 15 compiler (default is ``N``); note that Intel 18 is the only supported compiler on *Cheyenne*
+* **TRANSITION=Y** - applies selective lowering of optimization for selected files to obtain b4b results with non-CCPP code in PROD mode (only when using Intel 15 on *Theia*)
+
+Examples:
+
+* Compile non-CCPP code with 32-bit dynamics on *Theia* with the Intel compiler
+
+  .. code-block:: console
+
+     ./compile.sh $PWD/../FV3 theia.intel '32BIT=Y'
+
+* Compile dynamic CCPP code in ``DEBUG`` mode on *Jet* with Intel 18
+
+  .. code-block:: console
+
+     ./compile.sh $PWD/../FV3 jet.intel 'CCPP=Y DEBUG=Y INTEL18=Y'
+
+* Compile static CCPP code for the CPT suite on *Linux* with the GNU compiler and enable support for the SIONlib I/O library (requires the library to be installed)
+
+  .. code-block:: console
+
+     ./compile.sh $PWD/../FV3 linux.gnu 'SION=Y CCPP=Y STATIC=Y SUITES=FV3_CPT_v0'
+
+* *Cheyenne* static build with multiple suites:
+
+  .. code-block:: console
+
+     ./compile.sh $PWD/../FV3 cheyenne.intel 'CCPP=Y STATIC=Y SUITES=FV3_GFS_v15,FV3_CPT_v0'
+
+
+Running the UFS Atmosphere Using the Regression Tests (RTs)
+------------------------------------------------------------
+
+Regression testing is the process of testing changes to the programs to make sure that the existing functionalities still work when changes are introduced. By running the RTs (or a subset of them, obtained by copying an RT configuration file and editing it), the code is compiled, the run directories are set up, and the code is executed. The results are typically compared against a pre-existing baseline, but on certain occasions it is necessary to first create a new baseline (for example, on a new platform where a baseline does not exist, or when a new development is expected to change the answer). Because the RTs set up the run directories, this is a useful and easy way to get started, since all the model configuration files and necessary input data (initial conditions, fixed data) are copied into the right place.
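+
+As a quick illustration (the subset file name below is hypothetical, and the tests to keep depend on the user's needs), a trimmed copy of an RT configuration file can be run with the ``-l`` option of ``rt.sh`` described in the following sections:
+
+.. code-block:: console
+
+   cd tests
+   cp rt.conf rt_my_subset.conf    # start from the default RT configuration file
+   # edit rt_my_subset.conf, keeping only the compile command(s) and tests of interest
+   ./rt.sh -l rt_my_subset.conf    # run only the tests listed in the subset file
+
+The same ``-l`` mechanism is used later in this chapter to run the CCPP-specific RT configuration files.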
+ +Overview of the RTs +^^^^^^^^^^^^^^^^^^^ + +The RT configuration files are located in ``./tests`` relative to the top-level directory of NEMSfv3gfs and have names ``rt*.conf``. The default RT configuration file, supplied with the NEMSfv3gfs master, compares the results from the non-CCPP code to the *official baseline* and is called ``rt.conf``. Before running the RT script ``rt.sh`` in the same directory, the user has to set one or more environment variables and potentially modify the script to change the location of the automatically created run directories. The environment variables are ``ACCNR`` (mandatory unless the user is a member of the default project *nems*; sets the account to be charged for running the RTs), ``NEMS_COMPILER`` (optional for the ``intel`` compiler option, set to ``gnu`` to switch), and potentially ``RUNDIR_ROOT``. ``RUNDIR_ROOT`` allows the user to specify an alternative location for the RT run directories underneath which directories called ``rt_$PID`` are created (``$PID`` is the process identifier of the ``rt.sh`` invocation). This may be required on systems where the user does not have write permissions in the default run directory tree. + +.. code-block:: console + + export ACCNR=... + export NEMS_COMPILER=intel + export RUNDIR_ROOT=/full/path/under/which/rt_$PID/will/be/created + +Running the full default RT suite defined in ``rt.conf`` using the script ``rt.sh``: + +.. code-block:: console + + ./rt.sh -f + +This command can only be used on a NOAA machine using the Intel compiler, where the output of a non-CCPP build using the default Intel version is compared against the *official baseline*. For information on testing the CCPP code, or using alternate computational platforms, see the following sections. + +This command and all others below produce log output in ``./tests/log_machine.compiler``. These log files contain information on the location of the run directories that can be used as templates for the user. Each ``rt*.conf`` contains one or more compile commands preceding a number of tests. + + +Baselines +^^^^^^^^^^^^^^^^^^^ + +Regression testing is only possible on machines for which baselines exist. EMC maintains *official baselines* of non-CCPP runs on *Jet* and *Wcoss* created with the Intel compiler. GMTB maintains additional baselines on *Theia, Jet, Cheyenne*, and *Gaea*. While GMTB is trying to keep up with changes to the official repositories, baselines maintained by GMTB are not guaranteed to be up-to-date. + +When porting the code to a new machine, it is useful to start by establishing a *personal baseline*. Future runs of the RT can then be compared against the *personal baseline* to ascertain that the results have not been inadvertently affected by code developments. The ``rt.sh -c`` option is used to create a *personal baseline*. + +.. code-block:: console + + ./rt.sh -l rt.conf -c fv3 # create own reg. test baseline + +Once the *personal baseline* has been created, future runs of the RT should be compared against the *personal baseline* using the ``-m`` option. + +.. code-block:: console + + ./rt.sh -l rt.conf -m # compare against own baseline + +The script rt.sh +^^^^^^^^^^^^^^^^^^^ + +``rt.sh`` is a bash shell file to run the RT and has the following options: + +.. 
code-block:: console
+
+   Usage: $0 -c <model> | -f | -s | -l <file> | -m | -r | -e | -h
+     -c <model>  create new baseline results for <model>
+     -f          run full suite of regression tests
+     -s          run standard suite of regression tests
+     -l <file>   run tests specified in <file>
+     -m          compare against new baseline results
+     -r          use Rocoto workflow manager
+     -e          use ecFlow workflow manager
+     -h          display this help
+
+The location of the run directories and *personal baseline* directories is controlled in ``rt.sh`` on a per-machine basis. The user is strongly advised NOT to modify the path to the *official baseline* directories.
+
+The *official baseline* directory is defined as:
+
+.. code-block:: console
+
+   RTPWD=$DISKNM/trunk-yyyymmdd/${COMPILER}   # on Cheyenne
+   RTPWD=$DISKNM/trunk-yyyymmdd               # elsewhere
+
+Note that ``yyyymmdd`` is the year, month, and day the RT baseline was created.
+
+.. warning:: Modifying ``$DISKNM`` will break the RTs!
+
+*Personal baseline* results (see below) are stored in
+
+.. code-block:: console
+
+   NEW_BASELINE=${STMP}/${USER}/FV3_RT/REGRESSION_TEST
+
+and RTs are run in ``$RUNDIR_ROOT``.
+
+Example: *Theia*
+
+.. code-block:: console
+
+   ...
+   dprefix=/scratch4/NCEPDEV
+   DISKNM=$dprefix/nems/noscrub/emc.nemspara/RT
+   STMP=$dprefix/stmp4
+   PTMP=$dprefix/stmp3
+   ...
+
+In case a user does not have write permissions to ``$STMP`` (``/scratch4/NCEPDEV/stmp4/``), ``$STMP`` must be modified without modifying ``$DISKNM`` (i.e. ``dprefix``). Similarly, if the user does not have write permissions to ``$PTMP``, the user can set the ``RUNDIR_ROOT`` environment variable to change the location of the run directories as described below.
+
+.. code-block:: console
+
+   # Overwrite default RUNDIR_ROOT if environment variable RUNDIR_ROOT is set
+   RUNDIR_ROOT=${RUNDIR_ROOT:-${PTMP}/${USER}/FV3_RT}/rt_$$
+
+
+Non-CCPP vs CCPP Tests
+^^^^^^^^^^^^^^^^^^^^^^
+
+While the official EMC RTs do not execute the CCPP code, GMTB provides RTs to exercise the CCPP in its various modes: ``rt_ccpp_standalone.conf`` tests the CCPP with the dynamic build and ``rt_ccpp_static.conf`` tests the CCPP with the static build. These tests compare the results of runs done using the CCPP against a previously generated *personal baseline* created without the CCPP by running ``rt_ccpp_ref.conf``. For this comparison, both the non-CCPP *personal baseline* and the tests using the CCPP are performed with code built with the :term:`REPRO` compiler options.
+
+The command below should be used to create a *personal baseline* using non-CCPP code compiled in :term:`REPRO` mode.
+
+.. code-block:: console
+
+   ./rt.sh -l rt_ccpp_ref.conf -c fv3   # create own reg. test baseline
+
+Once the *personal baseline* in REPRO mode has been created, the CCPP tests can be run to compare against it. Use the ``-l`` option to select the test suite and the ``-m`` option to compare against the *personal baseline*.
+
+.. code-block:: console
+
+   ./rt.sh -l rt_ccpp_standalone.conf -m   # dynamic build
+   ./rt.sh -l rt_ccpp_static.conf -m       # static build
+
+
+Compatibility between the Code Base, the SDF, and the Namelist in the UFS Atmosphere
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The variable ``suite_name`` within the ``namelist.input`` file used in the UFS Atmosphere determines which suite will be employed at run time (e.g., ``suite_name=FV3_GFS_v15``). It is the user’s responsibility to ascertain that the other variables in ``namelist.input`` are compatible with the chosen suite.
When runs are executed using the RT framework described in the preceding sections, compatibility is assured. For new experiments, users are responsible for modifying the two files (``SDF`` and ``namelist.input``) consistently, since limited checks are in place. + +Information about the UFS Atmosphere physics namelist can be found with the CCPP Scientific Documentation at xxx (**!!!LB: Update for release**). diff --git a/doc/CCPPtechnical/source/ConfigBuildOptions.rst b/doc/CCPPtechnical/source/ConfigBuildOptions.rst index d8ee01b7..9771860d 100644 --- a/doc/CCPPtechnical/source/ConfigBuildOptions.rst +++ b/doc/CCPPtechnical/source/ConfigBuildOptions.rst @@ -5,7 +5,7 @@ CCPP Configuration and Build Options ***************************************** While the *CCPP-Framework* code can be compiled independently, the *CCPP-Physics* code can only be used within a host modeling system that provides the variables and the kind, type, and DDT definitions. As such, it is advisable to integrate the CCPP configuration and build process with the host model’s. Part of the build process, known as the *prebuild* step since it precedes compilation, involves running a Python script that performs multiple functions. These functions include configuring the *CCPP-Physics* for use with the host model and autogenerating FORTRAN code to communicate variables between the physics and the dynamical core. The *prebuild* step will be discussed in detail in :numref:`Chapter %s `. -There are some differences between building and running the SCM and the UFS Atmosphere. In the case of the UFS Atmosphere as the host model, there are several build options (:numref:`Figure %s `). The choice can be specified through command-line options supplied to the ``compile.sh`` script for manual compilation or through a regression test (RT) configuration file. Detailed instructions for building the code are discussed in Chapter 9. +There are some differences between building and running the SCM and the UFS Atmosphere. In the case of the UFS Atmosphere as the host model, there are several build options (:numref:`Figure %s `). The choice can be specified through command-line options supplied to the ``compile.sh`` script for manual compilation or through a regression test (RT) configuration file. Detailed instructions for building the code are discussed in :numref:`Chapter %s `. The relevant options for building CCPP with the UFS Atmosphere can be described as follows: @@ -13,7 +13,7 @@ The relevant options for building CCPP with the UFS Atmosphere can be described .. * **Hybrid CCPP**: The code is compiled with CCPP enabled and allows combining non-CCPP-Physics and CCPP-compliant physics. This is restricted to parameterizations that are termed as “physics” by EMC, i.e. that in a non-CCPP build would be called from ``GFS_physics_driver.F90``. Parameterizations that fall into the categories “time_vary”, “radiation” and “stochastics” have to be CCPP-compliant. The hybrid option is fairly complex and not recommended for users to start with. It is intended as a temporary measure for research and development until all necessary physics are available through the CCPP. This option uses the existing physics calling infrastructure ``GFS_physics_driver.F90`` to call either CCPP-compliant or non-CCPP-compliant schemes within the same run. Note that the *CCPP-Framework* and *CCPP-physics* are dynamically linked to the executable for this option. -* **With CCPP (CCPP=Y)**: The code is compiled with CCPP enabled and restricted to CCPP-compliant physics. 
That is, any parameterization to be called as part of a suite must be available in CCPP. Physics scheme selection and order is determined at runtime by an external suite definition file (SDF; see :ref:`ConstructingSuite` for further details on the SDF). The existing physics-calling code ``GFS_physics_driver.F90`` and ``GFS_radiation_driver.F90`` are bypassed altogether in this mode and any additional code needed to connect parameterizations within a suite previously contained therein is executed from the so-called CCPP-compliant “interstitial schemes”. One further option determines how the CCPP-compliant physics are called within the host model: +* **With CCPP (CCPP=Y)**: The code is compiled with CCPP enabled and restricted to CCPP-compliant physics. That is, any parameterization to be called as part of a suite must be available in CCPP. Physics scheme selection and order is determined at runtime by an external suite definition file (SDF; see :numref:`Chapter %s ` for further details on the SDF). The existing physics-calling code ``GFS_physics_driver.F90`` and ``GFS_radiation_driver.F90`` are bypassed altogether in this mode and any additional code needed to connect parameterizations within a suite previously contained therein is executed from the so-called CCPP-compliant “interstitial schemes”. One further option determines how the CCPP-compliant physics are called within the host model: * **Dynamic CCPP (STATIC=N)**: This option is recommended for research and development users, since it allows choosing any physics schemes within the CCPP library at runtime by making adjustments to the CCPP SDF and the model namelist. This option carries computational overhead associated with the higher level of flexibility. Note that the *CCPP-Framework* and *CCPP-physics* are dynamically linked to the executable. * **Static CCPP (STATIC=Y)**: The code is compiled with CCPP enabled but restricted to CCPP-compliant physics defined by one or more SDFs used at compile time. This option is recommended for users interested in production-mode and operational applications, since it limits flexibility in favor of runtime performance and memory footprint. Note that the *CCPP-Framework* and *CCPP-physics* are statically linked to the executable. diff --git a/doc/CCPPtechnical/source/HostSideCoding.rst b/doc/CCPPtechnical/source/HostSideCoding.rst index db059e95..5d3f0596 100644 --- a/doc/CCPPtechnical/source/HostSideCoding.rst +++ b/doc/CCPPtechnical/source/HostSideCoding.rst @@ -25,7 +25,7 @@ At present, only two types of variable definitions are supported by the CCPP-Fra Metadata Variable Tables in the Host Model ================================================== -To establish the link between host model variables and physics scheme variables, the host model must provide metadata tables similar to those presented in :numref:`Section %s `. The host model can have multiple metadata tables or just one. For each variable required by the pool of CCPP-Physics schemes, one and only one entry must exist on the host model side. The connection between a variable in the host model and in the physics scheme is made through its standard_name. +To establish the link between host model variables and physics scheme variables, the host model must provide metadata tables similar to those presented in :numref:`Section %s `. The host model can have multiple metadata tables or just one. For each variable required by the pool of CCPP-Physics schemes, one and only one entry must exist on the host model side. 
The connection between a variable in the host model and in the physics scheme is made through its ``standard_name``. The following requirements must be met when defining variables in the host model metadata tables (see also :ref:`Listing 6.1 ` for examples of host model metadata tables). @@ -100,8 +100,8 @@ While the use of standard Fortran variables is preferred, in the current impleme * ``GFS_cldprop_type`` cloud properties and tendencies needed by radiation from physics * ``GFS_radtend_type`` radiation tendencies needed by physics * ``GFS_diag_type`` fields targeted for diagnostic output to disk -* ``GFS_interstitial_type`` fields used to communicate variables among schemes in the slow physics group required to replace interstitial code in GFS_{physics, radiation}_driver.F90 in CCPP -* ``GFS_data_type`` combined type of all of the above except GFS_control_type and GFS_interstitial_type +* ``GFS_interstitial_type`` fields used to communicate variables among schemes in the slow physics group required to replace interstitial code in ``GFS_{physics, radiation}_driver.F90`` in CCPP +* ``GFS_data_type`` combined type of all of the above except ``GFS_control_type`` and ``GFS_interstitial_type`` * ``CCPP_interstitial_type`` fields used to communicate variables among schemes in the fast physics group The DDT descriptions provide an idea of what physics variables go into which data type. ``GFS_diag_type`` can contain variables that accumulate over a certain amount of time and are then zeroed out. Variables that require persistence from one timestep to another should not be included in the ``GFS_diag_type`` nor the ``GFS_interstitial_type`` DDTs. Similarly, variables that need to be shared between groups cannot be included in the ``GFS_interstitial_type`` DDT. Although this memory management is somewhat arbitrary, new variables provided by the host model or derived in an interstitial scheme should be put in a DDT with other similar variables. @@ -136,7 +136,7 @@ Array slices can be used by physics schemes that only require certain values fro CCPP API ======================================================== -The CCPP Application Programming Interface (API) is comprised of a set of clearly defined methods used to communicate variables between the host model and the physics and to run the physics. The bulk of the CCPP API is located in the CCPP-Framework, and is described in file ccpp_api.F90. Some aspects of the API differ between the dynamic and static build. In particular, subroutines ccpp_physics_init, ccpp_physics_finalize, and ccpp_physics_run (described below) are made public from ccpp_api.F90 for the dynamic build, and are contained in ccpp_static_api.F90 for the static build. Moreover, these subroutines take an additional argument (suite_name) for the static build. File ccpp_static_api.F90 is auto-generated when the script ccpp_prebuild.py is run for the static build. +The CCPP Application Programming Interface (API) is comprised of a set of clearly defined methods used to communicate variables between the host model and the physics and to run the physics. The bulk of the CCPP API is located in the CCPP-Framework, and is described in file ``ccpp_api.F90``. Some aspects of the API differ between the dynamic and static build. In particular, subroutines ``ccpp_physics_init``, ``ccpp_physics_finalize``, and ``ccpp_physics_run`` (described below) are made public from ``ccpp_api.F90`` for the dynamic build, and are contained in ``ccpp_static_api.F90`` for the static build. 
Moreover, these subroutines take an additional argument (``suite_name``) for the static build. File ``ccpp_static_api.F90`` is auto-generated when the script ``ccpp_prebuild.py`` is run for the static build. .. _DataStructureTransfer: @@ -144,9 +144,9 @@ The CCPP Application Programming Interface (API) is comprised of a set of clearl Data Structure to Transfer Variables between Dynamics and Physics ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, -The roles of cdata structure in dealing with data exchange are not the same between the dynamic and the static builds of the CCPP. For the dynamic build, the cdata structure handles the data exchange between the host model and the physics schemes. cdata is a DDT containing a list of pointers to variables and their metadata and is persistent in memory. +The roles of ``cdata`` structure in dealing with data exchange are not the same between the dynamic and the static builds of the CCPP. For the dynamic build, the ``cdata`` structure handles the data exchange between the host model and the physics schemes. ``cdata`` is a DDT containing a list of pointers to variables and their metadata and is persistent in memory. -For both the dynamic and static builds, the cdata structure is used for holding five variables that must always be available to the physics schemes. These variables are listed in a metadata table in ccpp/framework/src/ccpp_types.F90 (:ref:`Listing 6.2 `). +For both the dynamic and static builds, the ``cdata`` structure is used for holding five variables that must always be available to the physics schemes. These variables are listed in a metadata table in ``ccpp/framework/src/ccpp_types.F90`` (:ref:`Listing 6.2 `). * Error flag for handling in CCPP (``errmsg``). @@ -175,7 +175,7 @@ Two of the variables are mandatory and must be passed to every physics scheme: ` Note that ``cdata`` is not restricted to being a scalar but can be a multidimensional array, depending on the needs of the host model. For example, a model that uses a one-dimensional array of blocks for better cache-reuse may require ``cdata`` to be a one-dimensional array of the same size. Another example of a multi-dimensional array of ``cdata`` is in the SCM, which uses a one-dimensional cdata array for N independent columns. -Due to a restriction in the Fortran language, there are no standard pointers that are generic pointers, such as the C language allows. The CCPP system therefore has an underlying set of pointers in the C language that are used to point to the original data within the host application cap. The user does not see this C data structure, but deals only with the public face of the Fortran cdata DDT. The type ``ccpp_t`` is defined in ``ccpp/framework/src/ccpp_types.F90``. +Due to a restriction in the Fortran language, there are no standard pointers that are generic pointers, such as the C language allows. The CCPP system therefore has an underlying set of pointers in the C language that are used to point to the original data within the host application cap. The user does not see this C data structure, but deals only with the public face of the Fortran ``cdata`` DDT. The type ``ccpp_t`` is defined in ``ccpp/framework/src/ccpp_types.F90``. 
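+
+As an illustration only (this is not code taken from either host model), a host model might declare its ``cdata`` instance(s) as in the sketch below. The module name ``ccpp_types`` is assumed from the file name ``ccpp_types.F90``, and ``n_columns`` is a hypothetical host-model dimension:
+
+.. code-block:: fortran
+
+   use ccpp_types, only: ccpp_t
+
+   type(ccpp_t)              :: cdata          ! a single instance for the whole domain
+   type(ccpp_t), allocatable :: cdata_cols(:)  ! or one instance per column/block, as in the SCM
+
+   allocate(cdata_cols(n_columns))             ! n_columns: hypothetical number of independent columns
+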
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Adding and Retrieving Information from cdata (dynamic build option) @@ -183,15 +183,23 @@ Adding and Retrieving Information from cdata (dynamic build option) Subroutines ``ccpp_field_add`` and ``ccpp_field_get`` are part of the CCPP-Framework and are used (in the dynamic build only) to load and retrieve information to and from ``cdata``. The calls to ``ccpp_field_add`` are auto-generated by the script ``ccpp_prebuild.py`` and inserted onto the host model code via include files (i.e. ``FV3/CCPP_layer/ccpp_fields_slow_physics.inc``) before it is compiled. -A typical call to ``ccpp_field_add`` is below, where the first argument is the instance of ``cdata`` to which the information should be added, the second argument is the standard_name of the variable, the third argument is the corresponding host model variable, the fourth argument is an error flag, the fifth argument is the units of the variable, and the last (optional) argument is the position within ``cdata`` in which the variable is expected to be stored. +A typical call to ``ccpp_field_add`` is below, where the first argument is the instance of ``cdata`` to which the information should be added, the second argument is the ``standard_name`` of the variable, the third argument is the corresponding host model variable, the fourth argument is an error flag, the fifth argument is the units of the variable, and the last (optional) argument is the position within ``cdata`` in which the variable is expected to be stored. .. code-block:: fortran call ccpp_field_add(cdata, 'y_wind_updated_by_physics', GFS_Data(cdata%blk_no)%Stateout%gv0, ierr=ierr, units='m s-1', index=886) +For DDTs, the interface to ``CCPP_field_add`` is slightly different: + +.. code-block:: fortran + + call ccpp_field_add(cdata, 'GFS_cldprop_type_instance', '', c_loc(GFS_Data(cdata%blk_no)%Cldprop), ierr=ierr, index=1) + +where the first argument and second arguments bear the same meaning as in the first example, the third argument is the units (can be left empty or set to “DDT”), the fourth argument is the C pointer to the variable in memory, the fifth argument is an error flag, and the last (optional) argument is the position within ``cdata`` as in the first example. + Each new variable added to ``cdata`` is always placed at the next free position, and a check is performed to confirm that this position corresponds to the expected one, which in this example is 886. A mismatch will occur if a developer manually adds a call to ``ccpp_field_add``, in which case a costly binary search is applied every time a variable is retrieved from memory. Adding calls manually is not recommended as all calls to ``ccpp_fields_add`` should be auto-generated. -The individual physics caps used in the dynamic build, which are auto-generated using the script ``ccpp_prebuild.py``, contain calls to ``ccpp_field_get`` to pull data from the ``cdata`` DDT as a Fortran pointer to a variable that will be passed to the individual physics scheme. +The individual physics *caps* used in the dynamic build, which are auto-generated using the script ``ccpp_prebuild.py``, contain calls to ``ccpp_field_get`` to pull data from the ``cdata`` DDT as a Fortran pointer to a variable that will be passed to the individual physics scheme. ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, Initializing and Finalizing the CCPP @@ -207,9 +215,9 @@ Note that optional arguments are denoted with square brackets. 
Suite Initialization Subroutine ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ -The suite initialization subroutine, ``ccpp_init``, takes three mandatory and two optional arguments. The mandatory arguments are the name of the suite (of type character), the name of the ``cdata`` variable that must be allocated at this point, and an integer used for the error status. Note that the suite initialization routine ``ccpp_init`` parses the SDF corresponding to the given suite name and initializes the state of the suite and its schemes. This process must be repeated for every element of a multi-dimensional ``cdata``. For performance reasons, it is possible to avoid repeated reads of the SDF and to have a single state of the suite shared between the elements of ``cdata``. To do so, specify an optional argument variable called ``cdata_target = X`` in the call to ``ccpp_init``, where X refers to the instance of ``cdata`` that has already been initialized. +The suite initialization subroutine, ``ccpp_init``, takes three mandatory and two optional arguments. The mandatory arguments are the name of the suite (of type character), the name of the ``cdata`` variable that must be allocated at this point, and an integer used for the error status. Note that the suite initialization routine ``ccpp_init`` parses the SDF corresponding to the given suite name and initializes the state of the suite and its schemes. This process must be repeated for every element of a multi-dimensional ``cdata``. For performance reasons, it is possible to avoid repeated reads of the SDF and to have a single state of the suite shared between the elements of ``cdata``. To do so, specify an optional argument variable called ``cdata_target = X`` in the call to ``ccpp_init``, where ``X`` refers to the instance of ``cdata`` that has already been initialized. -For a given suite name XYZ, the name of the suite definition file is inferred as ``suite_XYZ.xml``, and the file is expected to be present in the current run directory. It is possible to specify the optional argument ``is_filename=.true.`` to ``ccpp_init``, which will treat the suite name as an actual file name (with or without the path to it). +For a given suite name ``XYZ``, the name of the suite definition file is inferred as ``suite_XYZ.xml``, and the file is expected to be present in the current run directory. It is possible to specify the optional argument ``is_filename=.true.`` to ``ccpp_init``, which will treat the suite name as an actual file name (with or without the path to it). Typical calls to ``ccpp_init`` are below, where ``ccpp_suite`` is the name of the suite, and ``ccpp_sdf_filepath`` the actual SDF filename, with or without a path to it. @@ -241,7 +249,7 @@ If a specific data instance was used in a call to ``ccpp_init``, as in the above Running the physics ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, -The physics is invoked by calling subroutine ``ccpp_physics_run``. This subroutine is part of the CCPP API and is included with the CCPP-Framework (for the dynamic build) or auto-generated (for the static build). This subroutine is capable of executing the physics with varying granularity, that is, a single scheme (dynamic build only), a single group, or an entire suite can be run with a single subroutine call. 
Typical calls to ccpp_physics_run are below, where ``scheme_name`` and ``group_name`` are optional and mutually exclusive (dynamic build), and where ``suite_name`` is mandatory and ``group_name`` is optional (static build). +The physics is invoked by calling subroutine ``ccpp_physics_run``. This subroutine is part of the CCPP API and is included with the CCPP-Framework (for the dynamic build) or auto-generated (for the static build). This subroutine is capable of executing the physics with varying granularity, that is, a single scheme (dynamic build only), a single group, or an entire suite can be run with a single subroutine call. Typical calls to ``ccpp_physics_run`` are below, where ``scheme_name`` and ``group_name`` are optional and mutually exclusive (dynamic build), and where ``suite_name`` is mandatory and ``group_name`` is optional (static build). Dynamic build: @@ -305,16 +313,16 @@ Static build: Host Caps ======================================================== -The purpose of the host model cap is to abstract away the communication between the host model and the CCPP-Physics schemes. While CCPP calls can be placed directly inside the host model code (as is done for the relatively simple SCM), it is recommended to separate the cap in its own module for clarity and simplicity (as is done for the UFS Atmosphere). While the details of implementation will be specific to each host model, the host model cap is responsible for the following general functions: +The purpose of the host model *cap* is to abstract away the communication between the host model and the CCPP-Physics schemes. While CCPP calls can be placed directly inside the host model code (as is done for the relatively simple SCM), it is recommended to separate the *cap* in its own module for clarity and simplicity (as is done for the UFS Atmosphere). While the details of implementation will be specific to each host model, the host model *cap* is responsible for the following general functions: * Allocating memory for variables needed by physics - * All variables needed to communicate between the host model and the physics, and all variables needed to communicate among physics schemes, need to be allocated by the host model. The latter, for example for interstitial variables used exclusively for communication between the physics schemes, are typically allocated in the cap. + * All variables needed to communicate between the host model and the physics, and all variables needed to communicate among physics schemes, need to be allocated by the host model. The latter, for example for interstitial variables used exclusively for communication between the physics schemes, are typically allocated in the *cap*. -* Allocating the cdata structure(s) +* Allocating the ``cdata`` structure(s) - * For the dynamic build, the cdata structure handles the data exchange between the host model and the physics schemes, while for the static build, cdata is utilized in a reduced capacity. + * For the dynamic build, the ``cdata`` structure handles the data exchange between the host model and the physics schemes, while for the static build, ``cdata`` is utilized in a reduced capacity. * Calling the suite initialization subroutine @@ -322,22 +330,34 @@ The purpose of the host model cap is to abstract away the communication between * The suite must be initialized using ``ccpp_init``. 
-* Populating the cdata structure(s) +* Populating the ``cdata`` structure(s) - * For the dynamic build, each variable required by the physics schemes must be added to the cdata structure (or to each element of a multi-dimensional cdata) on the host model side using subroutine ``ccpp_field_add``. This is an automated task accomplished by inserting a preprocessor directive at the top of the cap (before implicit none) to load the required modules and a second preprocessor directive after the ``cdata`` variable and the variables required by the physics schemes are allocated and after the call to ``ccpp_init`` for this ``cdata`` variable. For the static build, this step can be skipped because the autogenerated caps for the physics (groups and suite caps) are automatically given memory access to the host model variables and they can be used directly, without the need for a data structure containing pointers to the actual variables (which is what ``cdata`` is). - -.. code-block:: fortran + * For the dynamic build, each variable required by the physics schemes must be added to the ``cdata`` + structure (or to each element of a multi-dimensional ``cdata``) on the host model side using subroutine + ``ccpp_field_add``. This is an automated task accomplished by inserting a preprocessor directive + + .. code-block:: fortran - #include ccpp_modules.inc + #include ccpp_modules.inc - #include ccpp_fields.inc + at the top of the cap (before implicit none) to load the required modules and a second preprocessor directive + + .. code-block:: fortran + + #include ccpp_fields.inc + + after the ``cdata`` variable and the variables required by the physics schemes are allocated and after the + call to ``ccpp_init`` for this ``cdata`` variable. For the static build, this step can be skipped because + the autogenerated *caps* for the physics (groups and suite *caps*) are automatically given memory access to the + host model variables and they can be used directly, without the need for a data structure containing pointers + to the actual variables (which is what ``cdata`` is). -* Note. The CCPP-Framework supports splitting physics schemes into different sets that are used in different parts of the host model. An example is the separation between slow and fast physics processes for the GFDL microphysics implemented in the UFS Atmosphere: while the slow physics are called as part of the usual model physics, the fast physics are integrated in the dynamical core. The separation of physics into different sets is determined in the CCPP prebuild configuration for each host model (see :numref:`Chapter %s `, and :numref:`Figure %s `), which allows to create multiple include files (e.g. ``ccpp_fields_slow_physics.inc`` and ``ccpp_fields_fast_physics.inc`` that can be used by different ``cdata`` structures in different parts of the model). This is a highly advanced feature and developers seeking to take further advantage of it should consult with GMTB first. + .. note:: The CCPP-Framework supports splitting physics schemes into different sets that are used in different parts of the host model. An example is the separation between slow and fast physics processes for the GFDL microphysics implemented in the UFS Atmosphere: while the slow physics are called as part of the usual model physics, the fast physics are integrated in the dynamical core. 
The separation of physics into different sets is determined in the CCPP *prebuild* configuration for each host model (see :numref:`Chapter %s `, and :numref:`Figure %s `), which allows the creation of multiple include files (e.g. ``ccpp_fields_slow_physics.inc`` and ``ccpp_fields_fast_physics.inc``) that can be used by different ``cdata`` structures in different parts of the model. This is a highly advanced feature and developers seeking to take further advantage of it should consult with GMTB first. * Providing interfaces to call the CCPP - * The cap must provide functions or subroutines that can be called at the appropriate places in the host model time integration loop and that internally call ``ccpp_init``, ``ccpp_physics_init``, ``ccpp_physics_run``, ``ccpp_physics_finalize`` and ``ccpp_finalize``, and handle any errors returned See :ref:`Listing 6.3 `. + * The *cap* must provide functions or subroutines that can be called at the appropriate places in the host model time integration loop and that internally call ``ccpp_init``, ``ccpp_physics_init``, ``ccpp_physics_run``, ``ccpp_physics_finalize`` and ``ccpp_finalize``, and handle any errors returned. See :ref:`Listing 6.3 `. .. _example_ccpp_host_cap: @@ -345,65 +365,74 @@ The purpose of the host model cap is to abstract away the communication between module example_ccpp_host_cap - use ccpp_api, only: ccpp_t, ccpp_field_add, ccpp_init, ccpp_finalize, & - ccpp_physics_init, ccpp_physics_run, ccpp_physics_finalize - use iso_c_binding, only: c_loc - ! Include auto-generated list of modules for ccpp - #include "ccpp_modules.inc" + use ccpp_api, only: ccpp_t, ccpp_init, ccpp_finalize + use ccpp_static_api, only: ccpp_physics_init, ccpp_physics_run, & + ccpp_physics_finalize + implicit none - ! CCPP data structure + ! CCPP data structure type(ccpp_t), save, target :: cdata public :: physics_init, physics_run, physics_finalize contains - subroutine physics_init(ccpp_suite_name) - character(len=*), intent(in) :: ccpp_suite_name - integer :: ierr - ierr = 0 - ! Initialize the CCPP framework, parse SDF - call ccpp_init(ccpp_suite_name, cdata, ierr=ierr) - if (ierr/=0) then - write(*,'(a)') "An error occurred in ccpp_init" - stop - end if - ! Include auto-generated list of calls to ccpp_field_add - #include "ccpp_fields.inc" - ! Initialize CCPP physics (run all _init routines) - call ccpp_physics_init(cdata, ierr=ierr) - ! error handling as above - end subroutine physics_init - - subroutine physics_run(group, scheme) - ! Optional arguments group and scheme can be used - ! to run a group of schemes or an individual scheme - ! defined in the SDF. Otherwise, run entire suite. - character(len=*), optional, intent(in) :: group - character(len=*), optional, intent(in) :: scheme - integer :: ierr - ierr = 0 - if (present(scheme)) then - call ccpp_physics_run(cdata, scheme_name=scheme, ierr=ierr) - else if (present(group)) then - call ccpp_physics_run(cdata, group_name=group, ierr=ierr) - else - call ccpp_physics_run(cdata, ierr=ierr) - end if - ! error handling as above - end subroutine physics_run - - subroutine physics_finalize() - integer :: ierr - ierr = 0 - ! Finalize CCPP physics (run all _finalize routines) - call ccpp_physics_finalize(cdata, ierr=ierr) - ! error handling as above - call ccpp_finalize(cdata, ierr=ierr) - ! error handling as above - end subroutine physics_finalize + subroutine physics_init(ccpp_suite_name) + character(len=*), intent(in) :: ccpp_suite_name + integer :: ierr + ierr = 0 + + !
Initialize the CCPP framework, parse SDF + call ccpp_init(trim(ccpp_suite_name), cdata, ierr=ierr) + if (ierr/=0) then + write(*,'(a)') "An error occurred in ccpp_init" + stop + end if + + ! Initialize CCPP physics (run all _init routines) + call ccpp_physics_init(cdata, suite_name=trim(ccpp_suite_name), & + ierr=ierr) + ! error handling as above + + end subroutine physics_init + + subroutine physics_run(ccpp_suite_name, group) + ! Optional argument group can be used to run a group of schemes & + ! defined in the SDF. Otherwise, run entire suite. + character(len=*), intent(in) :: ccpp_suite_name + character(len=*), optional, intent(in) :: group + + integer :: ierr + ierr = 0 + + if (present(group)) then + call ccpp_physics_run(cdata, suite_name=trim(ccpp_suite_name), & + group_name=group, ierr=ierr) + else + call ccpp_physics_run(cdata, suite_name=trim(ccpp_suite_name), & + ierr=ierr) + end if + ! error handling as above + + end subroutine physics_run + + subroutine physics_finalize(ccpp_suite_name) + character(len=*), intent(in) :: ccpp_suite_name + integer :: ierr + ierr = 0 + + ! Finalize CCPP physics (run all _finalize routines) + call ccpp_physics_finalize(cdata, suite_name=trim(ccpp_suite_name), & + ierr=ierr) + ! error handling as above + call ccpp_finalize(cdata, ierr=ierr) + ! error handling as above + + end subroutine physics_finalize + end module example_ccpp_host_cap -*Listing 6.3: Fortran template for a CCPP host model cap -REF## --- also notes in google-doc --- needs updated example code!!!* +*Listing 6.3: Fortran template for a CCPP host model cap from* ``ccpp/framework/doc/DevelopersGuide/host_cap_template.F90``. + +The following sections describe two implementations of host model caps to serve as examples. For each of the functions listed above, a description for how it is implemented in each host model is included. ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, SCM Host Cap @@ -422,7 +451,7 @@ With smaller parts in: ``gmtb-scm/scm/src/gmtb_scm_time_integration.f90`` -The host model cap is responsible for: +The host model *cap* is responsible for: * Allocating memory for variables needed by physics @@ -438,7 +467,7 @@ The host model cap is responsible for: * Calling the suite initialization subroutine - Within ``scm_state%n_cols`` loop in ``gmtb_scm.F90`` after initial SCM state setup and before first timestep, the suite initialization subroutine ``ccpp_init`` is called for each column with own instance of ``cdata``, and takes three arguments, the name of the runtime SDF, the name of the cdata variable that must be allocated at this point, and ierr. + Within ``scm_state%n_cols`` loop in ``gmtb_scm.F90`` after initial SCM state setup and before first timestep, the suite initialization subroutine ``ccpp_init`` is called for each column with own instance of ``cdata``, and takes three arguments, the name of the runtime SDF, the name of the ``cdata`` variable that must be allocated at this point, and ``ierr``. * Populating the cdata structure @@ -456,7 +485,7 @@ The host model cap is responsible for: * call ``physics%associate()``: to associate pointers in physics DDT with targets in ``scm_state``, which contains variables that are modified by the SCM “dycore” (i.e. forcing). 
- * Actual cdata fill in through ``ccpp_field_add`` calls: + * Actual ``cdata`` fill in through ``ccpp_field_add`` calls: ``#include “ccpp_fields.inc”`` @@ -466,7 +495,7 @@ The host model cap is responsible for: * Calling ``ccpp_physics_init()`` - Within the same ``scm_state%n_cols`` loop but after ``cdata`` is filled, the physics initialization routines (\*_init()) associated with the physics suite, group, and/or schemes are called at each column. + Within the same ``scm_state%n_cols`` loop but after ``cdata`` is filled, the physics initialization routines (``*_init()``) associated with the physics suite, group, and/or schemes are called at each column. * Calling ``ccpp_physics_run()`` @@ -482,7 +511,7 @@ The host model cap is responsible for: UFS Atmosphere Host Cap ,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, -For the UFS Atmosphere, there are slightly different versions of the host cap implementation depending on the desired build type (dynamic orstatic). As discussed in :numref:`Chapter %s `, these modes are controlled via appropriate strings included in the MAKEOPTS build-time argument. Within the source code, the three modes are executed within appropriate pre-processor directive blocks: +For the UFS Atmosphere, there are slightly different versions of the host cap implementation depending on the desired build type (dynamic or static). As discussed in :numref:`Chapter %s `, these modes are controlled via appropriate strings included in the MAKEOPTS build-time argument. Within the source code, the three modes are executed within appropriate pre-processor directive blocks: For any build that uses CCPP (dynamic orstatic): @@ -491,7 +520,7 @@ For any build that uses CCPP (dynamic orstatic): #ifdef CCPP #endif -For static (often nested within #ifdef CCPP): +For static (often nested within ``#ifdef CCPP``): .. code-block:: fortran @@ -502,15 +531,15 @@ The following text describes how the host cap functions listed above are impleme * Allocating memory for variables needed by physics - * Within the atmos_model_init subroutine of atmos_model.F90, the following statement is executed + * Within the ``atmos_model_init`` subroutine of ``atmos_model.F90``, the following statement is executed ``allocate(IPD_Data)`` ``IPD_Data`` is of ``IPD_data_type``, which is defined in ``IPD_typedefs.F90`` as a synonym for ``GFS_data_type`` defined in ``GFS_typedefs.F90``. This data type contains GFS-related DDTs (``GFS_statein_type``, ``GFS_stateout_type``, ``GFS_sfcprop_type``, etc.) as sub-types, which are defined in ``GFS_typedefs.F90``. -* Allocating the cdata structures +* Allocating the ``cdata`` structures - * For the current implementation of the UFS Atmosphere, which uses a subset of fast physics processes tightly coupled to the dynamical core, three instances of ``cdata`` exist within the host model: ``cdata_tile`` to hold data for the fast physics, ``cdata_domain`` to hold data needed for all UFS Atmosphere blocks for the slow physics, and ``cdata_block``, an array of ``cdata`` DDTss with dimensions of (``number of blocks``, ``number of threads``) to contain data for individual block/thread combinations for the slow physics. All are defined as module-level variables in the ``CCPP_data module`` of ``CCPP_data.F90``. The ``cdata_block`` array is allocated (since the number of blocks and threads is unknown at compile-time) as part of the ‘init’ step of the ``CCPP_step subroutine`` in ``CCPP_driver.F90``. 
Note: Although the ``cdata`` containers are not used to hold the pointers to the physics variables for the static mode, they are still used to hold other CCPP-related information for that mode. + * For the current implementation of the UFS Atmosphere, which uses a subset of fast physics processes tightly coupled to the dynamical core, three instances of ``cdata`` exist within the host model: ``cdata_tile`` to hold data for the fast physics, ``cdata_domain`` to hold data needed for all UFS Atmosphere blocks for the slow physics, and ``cdata_block``, an array of ``cdata`` DDTs with dimensions of (``number of blocks``, ``number of threads``) to contain data for individual block/thread combinations for the slow physics. All are defined as module-level variables in the ``CCPP_data module`` of ``CCPP_data.F90``. The ``cdata_block`` array is allocated (since the number of blocks and threads is unknown at compile-time) as part of the ``‘init’`` step of the ``CCPP_step subroutine`` in ``CCPP_driver.F90``. Note: Although the ``cdata`` containers are not used to hold the pointers to the physics variables for the static mode, they are still used to hold other CCPP-related information for that mode. * Calling the suite initialization subroutine @@ -524,7 +553,7 @@ The following text describes how the host cap functions listed above are impleme * When the dynamic mode is used, the ``cdata`` structures are filled with pointers to variables that are used by physics and whose memory is allocated by the host model. This is done using ``ccpp_field_add`` statements contained in the autogenerated include files. For the fast physics, this include file is named ``ccpp_fields_fast_physics.inc`` and is placed after the call to ``ccpp_init`` for ``cdata_tile`` in the ``atmosphere_init`` subroutine of ``atmosphere.F90``. For populating ``cdata_domain`` and ``cdata_block``, IPD data types are initialized in the ``atmos_model_init`` subroutine of ``atmos_model.F90``. The ``Init_parm`` DDT is filled directly in this routine and ``IPD_initialize`` (pointing to ``GFS_initialize`` and for populating diagnostics and restart DDTs) is called in order to fill the GFS DDTs that are used in the physics. Once the IPD data types are filled, they are passed to the ‘init’ step of the ``CCPP_step`` subroutine in ``CCPP_driver.F90`` where ``ccpp_field_add`` statements are included in ``ccpp_fields_slow_physics.inc`` after the calls to ``ccpp_init`` for the ``cdata_domain`` and ``cdata_block`` containers. - * Note: for the static mode, filling of the cdata containers with pointers to physics variables is not necessary. This is because the autogenerated caps for the physics groups (that contain calls to the member schemes) can fill in the argument variables without having to retrieve pointers to the actual data. This is possible because the host model metadata tables (that are known at ccpp_prebuild time) contain all the information needed about the location (DDTs and local names) to pass into the autogenerated caps for their direct use. + * Note: for the static mode, filling of the ``cdata`` containers with pointers to physics variables is not necessary. This is because the autogenerated *caps* for the physics groups (that contain calls to the member schemes) can fill in the argument variables without having to retrieve pointers to the actual data. 
This is possible because the host model metadata tables (that are known at ccpp_prebuild time) contain all the information needed about the location (DDTs and local names) to pass into the autogenerated *caps* for their direct use. * Providing interfaces to call the CCPP diff --git a/doc/CCPPtechnical/source/index.rst b/doc/CCPPtechnical/source/index.rst index a27113ad..e4536ee2 100644 --- a/doc/CCPPtechnical/source/index.rst +++ b/doc/CCPPtechnical/source/index.rst @@ -18,7 +18,7 @@ CCPP Technical Documentation HostSideCoding CodeManagement CCPPPreBuild - BuildingRunningHostMdoels + BuildingRunningHostModels AddingNewSchemes Acronyms Glossary From d571fc6ccb6fe3f622f56e7addcb7b3fa15af662 Mon Sep 17 00:00:00 2001 From: Julie Schramm Date: Mon, 17 Jun 2019 14:37:22 -0600 Subject: [PATCH 2/2] Update links to scientific documentation to v3 Add warning to top of Chapter 1 about relevance of document and pointing to current information. Add authors to PDF document in conf.py Add info to second page of pdf document on how to reference document. Build of html and pdf files successful and copied to server. --- .../source/BuildingRunningHostModels.rst | 2 +- doc/CCPPtechnical/source/Overview.rst | 14 +++++++++++++- doc/CCPPtechnical/source/ScientificDocRules.inc | 7 +++---- doc/CCPPtechnical/source/conf.py | 6 ++++-- 4 files changed, 21 insertions(+), 8 deletions(-) diff --git a/doc/CCPPtechnical/source/BuildingRunningHostModels.rst b/doc/CCPPtechnical/source/BuildingRunningHostModels.rst index 5bea7dfd..871baef2 100644 --- a/doc/CCPPtechnical/source/BuildingRunningHostModels.rst +++ b/doc/CCPPtechnical/source/BuildingRunningHostModels.rst @@ -460,4 +460,4 @@ Compatibility between the Code Base, the SDF, and the Namelist in the UFS Atmosp The variable ``suite_name`` within the ``namelist.input`` file used in the UFS Atmosphere determines which suite will be employed at run time (e.g., ``suite_name=FV3_GFS_v15``). It is the user’s responsibility to ascertain that the other variables in ``namelist.input`` are compatible with the chosen suite. When runs are executed using the RT framework described in the preceding sections, compatibility is assured. For new experiments, users are responsible for modifying the two files (``SDF`` and ``namelist.input``) consistently, since limited checks are in place. -Information about the UFS Atmosphere physics namelist can be found with the CCPP Scientific Documentation at xxx (**!!!LB: Update for release**). +Information about the UFS Atmosphere physics namelist can be found with the CCPP Scientific Documentation at https://dtcenter.org/GMTB/v3.0/sci_doc/. diff --git a/doc/CCPPtechnical/source/Overview.rst b/doc/CCPPtechnical/source/Overview.rst index f9939a17..ad479670 100644 --- a/doc/CCPPtechnical/source/Overview.rst +++ b/doc/CCPPtechnical/source/Overview.rst @@ -4,6 +4,18 @@ CCPP Overview ************************* +.. warning:: The information in this document is up to date with the CCPP and GMTB SCM v3 public + release as of June 17, 2019. If you are a developer looking for more current information, please + obtain and build the up-to-date Technical Documentation from the master branch of the ccpp-framework + code repository: + + .. 
code-block:: console + + git clone https://github.com/NCAR/ccpp-framework + cd ccpp-framework/doc/CCPPtechnical + make html + make latexpdf + Ideas for this project originated within the Earth System Prediction Capability (ESPC) physics interoperability group, which has representatives from the US National Center for Atmospheric Research (NCAR), the Navy, National Oceanic and Atmospheric Administration @@ -136,7 +148,7 @@ future operational implementations. The GFS_v15plus suite is the same as the GFS except using the Turbulent Kinetic Energy (TKE)-based EDMF PBL scheme. The Climate Process Team (CPT) v0 suite (CPT_v0) uses the aerosol-aware (aa) Morrison-Gettelman 3 (MG3) microphysics scheme and Chikira-Sugiyama convection scheme with Arakawa-Wu extension (CSAW). The NOAA Global -Systems Division (GSD) v0 suite (GFS_v0) includes aaThompson microphysics, +Systems Division (GSD) v0 suite (GSD_v0) includes aaThompson microphysics, Mellor-Yamada-Nakanishi-Niino (MYNN) PBL and shallow convection, Grell-Freitas (GF) deep convection schemes, and the Rapid Update Cycle (RUC) LSM.* diff --git a/doc/CCPPtechnical/source/ScientificDocRules.inc b/doc/CCPPtechnical/source/ScientificDocRules.inc index 5cf941e4..0f601ee2 100644 --- a/doc/CCPPtechnical/source/ScientificDocRules.inc +++ b/doc/CCPPtechnical/source/ScientificDocRules.inc @@ -28,7 +28,7 @@ documented schemes. Reviewing the documentation for CCPP parameterizations is a good way of getting started in writing documentation for a new scheme. The CCPP Scientific Documentation can be converted to html format -(see https://dtcenter.org/gmtb/users/ccpp/docs/sci_doc_v2/). +(see https://dtcenter.org/GMTB/v3.0/sci_doc/). Doxygen Comments and Commands ----------------------------- @@ -145,7 +145,7 @@ as defined in - \ref GFS_CALPRECIPTYPE - \ref STOCHY_PHYS -The HTML result is `here `_. +The HTML result is `here `_. You can see that the ``“-”`` signs before ``“@ref”`` generate a list with bullets. Doxygen command ``“\\c”`` displays its argument using a typewriter font. @@ -183,7 +183,6 @@ sense to choose label names that refer to their context. */ -The HTML result is `here `__. The physics scheme page will often describe the following: 1. Description section (``“\\section”``), which usually includes: @@ -237,7 +236,7 @@ is used to aggregate all code related to that scheme, even when it is in separat files. Since doxygen cannot know which files or subroutines belong to each physics scheme, each relevant subroutine must be tagged with the module name. This allows doxygen to understand your modularized design and generate the documentation accordingly. -`Here `__ +`Here `_ is a list of module list defined in CCPP. A module is defined using: diff --git a/doc/CCPPtechnical/source/conf.py b/doc/CCPPtechnical/source/conf.py index 8849b81d..00e36aca 100644 --- a/doc/CCPPtechnical/source/conf.py +++ b/doc/CCPPtechnical/source/conf.py @@ -21,7 +21,7 @@ project = 'CCPP Technical' copyright = '2019 ' -author = ' ' +author = 'J. Schramm, L. Bernardet, L. Carson, \\\ G. Firl, D. Heinzeller, L. Pan, and M. Zhang' # The short X.Y version version = 'v3.0.0' @@ -123,6 +123,7 @@ def setup(app): # -- Options for LaTeX output ------------------------------------------------ +latex_engine = 'pdflatex' latex_elements = { # The paper size ('letterpaper' or 'a4paper'). 
# @@ -139,6 +140,7 @@ def setup(app): # Latex figure (float) alignment # # 'figure_align': 'htbp', + 'maketitle': r'\newcommand\sphinxbackoftitlepage{For referencing this document please use: \newline \break Schramm, J., L. Bernardet, L. Carson, G. Firl, D. Heinzeller, L. Pan, and M. Zhang, 2019. CCPP Technical Documentation Release v3.0.0. 91pp. Available at https://dtcenter.org/GMTB/v3.0/ccpp\_tech\_guide.pdf.}\sphinxmaketitle' } # Grouping the document tree into LaTeX files. List of tuples @@ -146,7 +148,7 @@ def setup(app): # author, documentclass [howto, manual, or own class]). latex_documents = [ (master_doc, 'CCPPtechnical.tex', 'CCPP Technical Documentation', - ' ', 'manual'), + author, 'manual'), ]