Validation Case Setup Example
An example step-by-step procedure for authoring a validation case, from setting up the input files to compiling the FDS Validation Guide.
Before getting started, ask yourself: are you conversant in Matlab and LaTeX? No? Do yourself a favor: take a week out of your busy life and learn the basics of these tools. I guarantee your investment will pay off a thousandfold!
The purpose of this page is to document, through example, all of the minute steps required to get a validation case from conception to full automation, such that the FDS Validation Guide can be built without error by pressing a single button. Similar procedures also apply to the FDS Verification Guide.
The reader should take note of the files that need to be committed. In the text below, these files will be highlighted with a (commit) tag. The most common problem we encounter when compiling the guides is that an author will assume their case is fully committed because they are able to compile the guide on their machine, but if they have not committed all the necessary files the guide will not compile for the other project members.
Each validation case has a sub-directory in the Validation directory of the FDS-SMV repository. Choose the name of this directory carefully, because it will be used throughout the process. Each directory consists of the sub-directories named Experimental_Data, FDS_Input_Files, and FDS_Output_Files. There is also a bash shell script called Run_All.sh. As its name implies, this script runs the cases by copying the FDS input files into a new directory called Current_Results and then running them. Once the cases have completed, there is another script in FDS_Output_Files called Process_Output.csh that copies over just the output files from Current_Results that are to be committed to the FDS-SMV repository. It is too time-consuming to run the validation cases each night like the verification cases; thus, we commit the necessary output files to the repository.
The first thing to think about is what the case names will be. This is more important than it may seem at first. The reason is that the names work their way into the depths of the processing scripts through attachments to output data files, etc., and making any changes later can prove to be a nightmare. So, do all of us a favor and think carefully about the naming convention. The names ought to be descriptive, but somewhat compact. We also want to credit the test lab, sponsor, or individual researcher, if appropriate. Thus, the NRC_NIST series are experiments that were sponsored by the US Nuclear Regulatory Commission and conducted at NIST. The Steckler_Compartment series are experiments conducted by Ken Steckler at NIST.
The input and experimental data files ought to include the basic series name. Thus, the input files for Steckler_Compartment are Steckler_010.fds, Steckler_165.fds, and so on. The numbers should be tied directly to the test report. Do not reinvent test numbers; use whatever convention is in the test report, no matter how illogical it might seem.
- The first issue to discuss about input files is the text editor used to create them. Be careful not to commit DOS files with "^M" carriage returns. This happens commonly with Windows-based text editors. The safe bet is to look at your files in vi before committing.
- Make sure your CHID is the same as the input filename (minus the extension).
- Minimize input parameters. Do not add a parameter to the input file if it is already the FDS default.
- Minimize comments. Some users write dissertations inside their input files. Save this for the guide write up.
- Minimize the size of the output files. In most cases we do not need many slice files or time history devices. When possible, use line devices (POINTS=...) to gather statistics. This both reduces the clutter in the input file and reduces the size of the output. Usually the device files for validation cases are committed, and we want to minimize the size of the files.
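The first two checks in the list above are easy to automate before committing. The following is an illustrative pre-commit helper, not an official repository script; the CHID pattern it matches is an assumption about how the namelist line is typically written:

```python
# Illustrative sanity check for FDS input files before committing:
# flag DOS carriage returns (^M) and CHID/filename mismatches.
import re
from pathlib import Path

def check_input_file(path):
    """Return a list of problems found in one .fds input file."""
    problems = []
    raw = Path(path).read_bytes()
    if b"\r" in raw:  # DOS line endings from a Windows editor
        problems.append("contains DOS carriage returns (^M)")
    # Assumed pattern: CHID='name' on the &HEAD line
    m = re.search(r"CHID\s*=\s*'([^']+)'", raw.decode("ascii", "replace"))
    if m and m.group(1) != Path(path).stem:
        problems.append(f"CHID '{m.group(1)}' != filename '{Path(path).stem}'")
    return problems
```

Running this over FDS_Input_Files before a commit catches the two most common mistakes.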
In this section we will discuss the process of comparing the FDS results with experimental data. There are a few ways this can be done. The easiest path is if the expected results and the FDS results can be plotted with the Utilities/Matlab/scripts/dataplot.m Matlab script. This is usually the case if we are just looking at time histories of temperature, heat flux, and so on. However, if a significant amount of post-processing calculations must be done before direct comparisons can be made, it may be necessary to create your own Matlab script to generate the plots for the FDS Validation Guide. Note that at present Matlab is the standard. This is the only way we have to make everything look identical throughout the guides.
The experimental data should be organized in a simple, comma-delimited file (.csv) with the data organized into columns with simple, clear header names. Basically, the same naming conventions apply within the data file as we listed above for input filenames: no decimals, spaces, etc. Once the data have been organized, you may commit the file to the Experimental_Data sub-directory.
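For example, a well-formed data file might look like the following. The column names and values here are made up purely for illustration; use the device names and units from your own test report:

```
Time,TC_Lower,TC_Upper,HF_Wall
0,20.0,20.0,0.0
60,105.2,310.7,4.2
120,118.9,402.3,6.8
```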
Important quantities when comparing FDS to an experiment are the hot gas layer (HGL) temperature and height. However, neither of these quantities is measured or computed directly. Rather, they are computed from vertical arrays of thermocouples. When you run FDS_validation_script.m, a script called layer_height.m is called. This script scans all of the FDS_Output_Files directories for all the cases listed in Validation/Process_All_Output.sh, and whenever it finds a file that ends with _HGL.input, it computes the HGL temperature and height for that particular case. This input file has the following format:
3 Number of TC trees
18 Number of TCs per tree
0.05 54 72 90 Height (m) of the lowest TCs, and the column numbers in the devc file
0.30 55 73 91
0.55 56 74 92
0.80 57 75 93
1.05 58 76 94
1.30 59 77 95
1.55 60 78 96
1.80 61 79 97
2.05 62 80 98
2.30 63 81 99
2.55 64 82 100
2.80 65 83 101
3.05 66 84 102
3.30 67 85 103
3.55 68 86 104
3.80 69 87 105
3.85 70 88 106
3.90 71 89 107 Height (m) of the highest TCs, and the column numbers in the devc file
0.33 0.33 0.34 Weighting factors for each TC tree
PRS_D1_devc.csv The file containing the TC data
110 Number of columns in the file
3 Row to start reading data
4.00 Actual ceiling height of the compartment
-60 Time to begin reading the data
PRS_D1_Room_2_HGL.csv File to hold the computed layer height and temperatures
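The reduction that layer_height.m performs can be sketched as follows. The multiple TC trees are first combined into a single vertical profile using the weighting factors from the input file, and then a continuous two-zone reduction (in the style of Janssens and Tran) is applied at each instant in time. This Python version is only a sketch of the idea; the actual Matlab implementation may differ in its details:

```python
# Sketch of a two-zone (hot gas layer) reduction from one vertical
# temperature profile, in the spirit of layer_height.m.  Temperatures in K,
# heights in m, z sorted from floor to ceiling.  Multiple TC trees would be
# averaged into one profile with the weighting factors before calling this.
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (kept local to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def hgl_height_and_temp(z, T, H):
    """Layer interface height and upper-layer temperature at one instant."""
    I1 = trapz(T, z)           # integral of T over the height
    I2 = trapz(1.0 / T, z)     # integral of 1/T over the height
    Tl = T[0]                  # lower-layer temperature (lowest TC)
    z_int = Tl * (I1 * I2 - H**2) / (I1 + I2 * Tl**2 - 2.0 * Tl * H)
    # Upper-layer temperature = average of T above the interface height
    mask = z >= z_int
    zu = np.concatenate(([z_int], z[mask]))
    Tu = np.concatenate(([np.interp(z_int, z, T)], T[mask]))
    return z_int, trapz(Tu, zu) / (H - z_int)
```

For a sharp step profile (cold below, hot above), this reduction recovers the step height and the hot-side temperature, which is a useful sanity check.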
You only need to commit these _HGL.input files to the repository. The layer_height.m script will automatically create the _HGL.csv file. You can then invoke this file when running the dataplot.m script or your own special script.
As we mentioned before, the easiest thing to do when adding validation cases is to organize your output so that the dataplot script can be used to generate the plots. In this way, all the fonts, line widths, etc., are handled automatically. To add a plot via dataplot, you must add a line to the spreadsheet FDS_validation_dataplot_inputs.csv. All the examples you could need are there in the spreadsheet. The entries are organized by the type of output quantity. The first rows are for HGL (Hot Gas Layer) temperature. Most compartment fire validation cases include a comparison of HGL temperature and depth.
To best understand all of the columns in the file FDS_validation_dataplot_inputs.csv, just use an existing line as an example. Each line names the experimental data file, the FDS data file, and the various plot parameters. The key is to note the column where the plot will be saved. It will be saved in the Manuals directory, NOT the Validation directory. In our case, the plot should be printed to Manuals/FDS_Validation_Guide/SCRIPT_FIGURES/[Series Name]/plotname.pdf. Note: DO NOT COMMIT THIS FILE to the FDS-SMV repository. It is generated automatically by scripts that run each night.
Plots that may change when the code is modified are stored in SCRIPT_FIGURES. Other images that are needed in the guides are permanently archived in the FIGURES directory.
Each line of the FDS_validation_dataplot_inputs.csv file produces a single plot with one or more pairs of curves that are to be compared. The difference between each pair of curves produces a single point on a single scatter plot for the given output quantity of interest.
Quantity
This is the name of the scatter plot that will contain the given comparison point(s). This name has to match exactly the entry in the file called FDS_validation_scatterplot_inputs.csv.
Metric
- all: Compares the values of all points along each curve.
- area: Compares the value of the area under each curve.
- end: Compares the value of the last point on each curve.
- end_x_y: Compares the value of the last point on each specified curve, where x is the number of the expected curve that you want to compare to curve y of the predicted metric. The values x and y correspond to the order in which the curves are listed in the d1_Dep_Col_Name and d2_Dep_Col_Name columns. For example, to compare the end points of curves 1 and 3, you would specify end_1_3.
- max: Compares the maximum (peak) value of each curve.
- mean: Compares the mean value of each curve.
- mean_x_y: Compares the mean value on each specified curve, where x is the number of the expected curve that you want to compare to curve y of the predicted metric. For example, to compare the means of curves 2 and 2, you would specify mean_2_2.
- min: Compares the minimum value of each curve.
- threshold: Compares the independent value when the dependent value reaches some threshold. For example, to compare the time to a threshold temperature for cables, the d1_Comp_Start value is set to the threshold temperature, and then the time at which this temperature is attained is reported.
Error_Tolerance
This is not used for validation work. Just put 0.
Group_Key_Label
The name of the test series, as you want it to appear on the scatter plots.
Group_Style
The color and shape of the symbol on the scatter plot. Each test series has a unique identifier. For example, the series ATF Corridors uses ro, a red circle.
Fill_Color
You can opt to fill the symbol defined by Group_Style.
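To make the Metric options above concrete, here is how a few of the simpler reductions might be computed. This is an illustrative Python sketch, not the authoritative version, which lives in the Matlab dataplot machinery:

```python
# Illustrative implementations of a few of the Metric options.
# dataplot evaluates a reduction like this for the measured curve and the
# predicted curve; the resulting pair of numbers is one scatter-plot point.
import numpy as np

def trapz(y, x):
    """Trapezoidal integration (kept local to avoid NumPy version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def metric_value(x, y, metric):
    """Reduce one curve (x = independent, y = dependent) to a single number."""
    if metric == "max":
        return np.max(y)       # peak value of the curve
    if metric == "min":
        return np.min(y)       # minimum value of the curve
    if metric == "mean":
        return np.mean(y)      # mean of the dependent values
    if metric == "end":
        return y[-1]           # last point on the curve
    if metric == "area":
        return trapz(y, x)     # area under the curve
    raise ValueError(f"unknown metric: {metric}")
```

The paired variants (end_x_y, mean_x_y) simply apply the same reduction to specific columns of the data files.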
If the plots produced by dataplot.m are not appropriate for your case, you might have to write your own Matlab script to process the data. If you are Matlab savvy, this is not too hard. If not, it is a great way to learn Matlab! The most difficult part of all this for the uninitiated seems to be keeping the repository directory structure straight in their heads while creating the plotting routines. This is critical, because ultimately the script will live here:
Utilities/Matlab/scripts
But... it will be launched via the master script (FDS_validation_script.m), which lives here:
Utilities/Matlab/
These relative directories can be confusing at first. But follow this example, and you will get things right from here on.
The first thing to do is read in the experimental data and the FDS data, perform whatever calculations are necessary, and plot the results. In the next section we will worry about the details of how the plot looks.
As usual, think of a good name for your script and add it to the master list of special scripts under "Special cases" in FDS_validation_script.m. While you are developing the script, it is usually helpful to comment out all the other calls above your script. This allows you to run your script in "production mode" from exactly the directory where it will live permanently. Mark my words: if you decide to do all your development offline in some other directory, you will have all kinds of trouble merging everything in once you think you are finished. Please do not think that because you can generate an Excel plot you are anywhere close to finished.
The best advice for creating your own script is to copy the format of one that already exists.
Before you commit all your new entries to dataplot.m or your new scripts, you should run the master Matlab script FDS_validation_script.m and inspect the plots. However, running this entire script takes a lot of time. To save time, comment out with a % all the scripts you don't need. Leave all the directory definitions as is. Change the line:
[saved_data,drange] = dataplot(Dataplot_Inputs_File, Working_Dir, Manuals_Dir);
to
[saved_data,drange] = dataplot(Dataplot_Inputs_File, Working_Dir, Manuals_Dir, 'My Tests');
where My Tests is an entry in the second column of FDS_validation_dataplot_inputs.csv, under the header Dataname. By doing this, only these plots will be processed.
Remember to restore FDS_validation_script.m to its original form before committing.
The LaTeX source for the guide is located in Manuals/FDS_Validation_Guide/FDS_Validation_Guide.tex. The source files for the guide are organized by chapter, like HGL Temperature and Depth. Edit this file with whatever editor you choose. Decide where your section should reside. Typically results are organized by output quantity, and test series names are often entered in alphabetical order. Follow the existing conventions.
As with the Matlab script, conforming to the conventions we have established in the guide is basically a matter of copying and editing an existing section.
Depending on your computer setup, there are different ways to compile the document. But the basic sequence of operations is the same. You have to run LaTeX once, then run BibTeX once, then LaTeX two more times to resolve all references. In Linux it looks like this (note that you do not include the extension for the bibtex run):
$ pdflatex FDS_Validation_Guide.tex
$ bibtex FDS_Validation_Guide
$ pdflatex FDS_Validation_Guide.tex
$ pdflatex FDS_Validation_Guide.tex
In between each run you will see a bunch of cryptic log messages that fly by too fast to read. Hence the next step.
Before you commit your edits to the guide (of course, first update your repository), open a bash shell, cd to Manuals/FDS_Validation_Guide, and run
$ ./make_guide.sh
This script will compile the LaTeX document by running pdflatex, bibtex, pdflatex, pdflatex, which is the sequence needed to resolve all references in the document. The script then checks for warnings or error messages associated with include files (like plots) that are not found. If all goes well, you will get something like
FDS Validation Guide built successfully!
If all is not well, you may see something like
! LaTeX Error: File SCRIPT_FIGURES/heated_channel_Tplus not found.
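The missing-file check amounts to scanning the LaTeX log for messages like the one above. As an illustration (the real check inside make_guide.sh is bash, and the exact pattern it greps for is an assumption here):

```python
# Illustrative scan of a pdflatex log for missing include files,
# similar in spirit to the check performed by make_guide.sh.
import re

def missing_files(log_text):
    """Return the names of files that LaTeX reported as not found."""
    return re.findall(r"LaTeX Error: File (\S+?) not found", log_text)
```

If this returns a non-empty list, some plot or figure was not generated or not committed.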
Once you have compiled the guide without errors or warnings, go ahead and update your repository and commit the .tex and .bib files.
If everything has been done correctly, then our automated user, Firebot, will be able to generate your plots, and compile the guide while you slumber. If something has been missed, we will all wake to the following email: Firebot angry! Well, mildly upset. Not to worry. The great thing about the repository is that we all iterate and fix our errors as soon as possible.