Here are some initial thoughts about design for the analysis routines. Feedback (wish lists? prioritization? reorganization?) is heartily welcomed! @jhamman @tcchiao @norlandrhagen
Structure:

qaqc.py

Will include checking for:

- NaNs
- aphysical quantities (e.g. temperature < 0 K)
- aphysical temporal patterns (e.g. a simple seasonality check on temperature?)
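As a concrete starting point, here is a minimal sketch of what these checks could look like, assuming the data arrive as an xarray Dataset with lat/lon/time dimensions; the function names and the temperature variable name are placeholders rather than settled API:

```python
import xarray as xr


def check_for_nans(ds: xr.Dataset) -> dict:
    """Count NaNs per variable."""
    return {name: int(da.isnull().sum()) for name, da in ds.data_vars.items()}


def check_aphysical_values(ds: xr.Dataset, var: str = "temperature", lower: float = 0.0) -> int:
    """Count values below a physical lower bound (e.g. temperature < 0 K)."""
    return int((ds[var] < lower).sum())


def check_seasonality(ds: xr.Dataset, var: str = "temperature") -> xr.DataArray:
    """Very rough seasonality check: monthly climatology of the spatial mean.

    A flat annual cycle where a strong one is expected would flag a problem.
    """
    return ds[var].mean(dim=["lat", "lon"]).groupby("time.month").mean()
```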
metrics.py

Will include:

- comparison of downscaled/bias-corrected outputs with the training data for the historical period
- assessment of projected future changes

Comparisons will occur:

- for different variables
- at different timescales and aggregations (daily, weekly, monthly, annual, decadal)
- for different frequencies (assessing different return intervals)
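A rough sketch of how those comparisons might be wired up, assuming both the downscaled output and the training data are xarray Datasets on the same grid; the variable name, resampling frequencies, and climatic periods here are placeholders:

```python
import xarray as xr


def historical_bias(downscaled: xr.Dataset, training: xr.Dataset, var: str, freq: str = "1MS") -> xr.DataArray:
    """Mean bias of the downscaled output relative to the training data at a given aggregation."""
    ds_agg = downscaled[var].resample(time=freq).mean()
    train_agg = training[var].resample(time=freq).mean()
    return (ds_agg - train_agg).mean(dim="time")


def future_change(ds: xr.Dataset, var: str,
                  hist_slice=slice("1981", "2010"),
                  fut_slice=slice("2040", "2069")) -> xr.DataArray:
    """Projected change: future-period mean minus historical-period mean."""
    return ds[var].sel(time=fut_slice).mean(dim="time") - ds[var].sel(time=hist_slice).mean(dim="time")


def return_level(ds: xr.Dataset, var: str, return_period: int = 20) -> xr.DataArray:
    """Empirical return level from annual maxima (e.g. the 20-year event)."""
    annual_max = ds[var].resample(time="1YS").max()
    return annual_max.quantile(1 - 1 / return_period, dim="time")
```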
plotting.py

Will include:

- the ability to visualize results from qaqc.py and metrics.py
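One possible shape for these routines is a set of thin wrappers that take the outputs of qaqc.py and metrics.py and render standard figures; matplotlib is assumed and the function name is illustrative:

```python
import matplotlib.pyplot as plt
import xarray as xr


def plot_metric_map(field: xr.DataArray, title: str = "", cmap: str = "RdBu_r"):
    """Plot a 2-D (lat, lon) metric field, e.g. a historical bias map."""
    fig, ax = plt.subplots(figsize=(8, 4))
    field.plot(ax=ax, cmap=cmap)  # xarray's built-in matplotlib wrapper
    ax.set_title(title)
    return fig
```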
analysis.py

Will include:

- automatic running of a set of plotting routines for every downscaling run
- the ability to compare two downscaling runs (e.g. feed in runs from two different downscaling set-ups and generate a diff of the metrics outputs for those two runs)
- ensemble analyses (for comparing more than two runs)
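A sketch of the run-comparison and ensemble pieces, reusing the hypothetical historical_bias function from the metrics sketch above and assuming each run is an xarray Dataset:

```python
import xarray as xr


def diff_runs(run_a: xr.Dataset, run_b: xr.Dataset, training: xr.Dataset, var: str) -> xr.DataArray:
    """Difference of a metric between two downscaling runs (run_a minus run_b)."""
    return historical_bias(run_a, training, var) - historical_bias(run_b, training, var)


def ensemble_spread(runs: dict, var: str) -> xr.DataArray:
    """Across-run spread for ensemble analyses: stack the runs along a new 'run' dimension."""
    stacked = xr.concat([ds[var] for ds in runs.values()], dim="run")
    return stacked.std(dim="run")
```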
1. QAQC (drawn from qaqc.py). It would be great to also include functionality to check for missing chunks (a sketch follows below this list).
2. Assessment of performance with respect to a benchmark. This is dataset agnostic and will take as input a pointer to a benchmark dataset. For example, it could compare a historical downscaled GCM dataset to the historical training dataset (which is what will run automatically after every downscaling run), or compare the raw GCM to a coarsened historical training dataset (how good was the raw GCM?).
3. Future changes for a given GCM (across three different climatic periods: 2030s, 2050s, 2080s).
4. Comparisons across multiple datasets (e.g. multiple GCMs, multiple downscaling methods).
1-3 will run automatically as part of every downscaling flow execution. 4 will run on demand to compare multiple selected datasets.
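For the missing-chunk check in item 1, one option (assuming the outputs are written to Zarr stores and zarr-python 2.x is available) is to compare the number of chunks an array should have with the number actually written; the store path and array name below are placeholders:

```python
import zarr


def find_missing_chunks(store_path: str, array_name: str) -> int:
    """Return how many chunk objects are absent from a Zarr array."""
    group = zarr.open_group(store_path, mode="r")
    arr = group[array_name]
    # nchunks is the number of chunks implied by the shape/chunking;
    # nchunks_initialized is the number actually present in the store.
    return arr.nchunks - arr.nchunks_initialized
```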
Some other things to think about:
Do we want to make summary files as a byproduct of analysis runs, or have an automatic summary-file calculation step that we always run and have the analysis read from those files (i.e. a step 0 that calculates the summary files)?
We will use the coarsened ERA5 files as part of the downscaling but then also in the analysis; do we want to save those more permanently?
Plots:

- for regional averages (SREX regions) and selections of individual points (tower locations?)
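A sketch of how the regional averages and point selections could be computed, assuming the regionmask package for the SREX regions and xarray data with lat/lon coordinates; variable and coordinate names are placeholders:

```python
import regionmask
import xarray as xr


def srex_regional_means(field: xr.DataArray) -> xr.DataArray:
    """Average a 2-D (lat, lon) field over each SREX region."""
    mask = regionmask.defined_regions.srex.mask(field)  # region index per grid cell
    return field.groupby(mask).mean()


def select_point(ds: xr.Dataset, var: str, lat: float, lon: float) -> xr.DataArray:
    """Time series at the grid cell nearest an individual point (e.g. a tower location)."""
    return ds[var].sel(lat=lat, lon=lon, method="nearest")
```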