
Commit

Merge pull request #95 from dklein-pik/master
Remove `iamCheck`
dklein-pik authored Mar 12, 2024
2 parents cbe65c2 + 6b6af94 commit 0a19691
Showing 11 changed files with 298 additions and 126 deletions.
2 changes: 1 addition & 1 deletion .buildlibrary
@@ -1,4 +1,4 @@
-ValidationKey: '3760100'
+ValidationKey: '3780463'
AcceptedWarnings:
- 'Warning: package ''.*'' was built under R version'
- 'Warning: namespace ''.*'' is not available and has been replaced'
4 changes: 2 additions & 2 deletions CITATION.cff
@@ -2,8 +2,8 @@ cff-version: 1.2.0
message: If you use this software, please cite it using the metadata from this file.
type: software
title: 'modelstats: Run Analysis Tools'
-version: 0.19.0
-date-released: '2024-03-08'
+version: 0.19.1
+date-released: '2024-03-11'
abstract: A collection of tools to analyze model runs.
authors:
- family-names: Giannousakis
5 changes: 2 additions & 3 deletions DESCRIPTION
@@ -1,8 +1,8 @@
Package: modelstats
Type: Package
Title: Run Analysis Tools
-Version: 0.19.0
-Date: 2024-03-08
+Version: 0.19.1
+Date: 2024-03-11
Authors@R: c(
person("Anastasis", "Giannousakis", email = "[email protected]", role = c("aut","cre")),
person("Oliver", "Richters", role = "aut")
@@ -14,7 +14,6 @@ Imports:
crayon,
gdx,
gtools,
-piamModelTests,
lubridate,
lucode2,
utils,
1 change: 0 additions & 1 deletion NAMESPACE
@@ -19,7 +19,6 @@ importFrom(lucode2,sendmail)
importFrom(magclass,collapseNames)
importFrom(magclass,read.report)
importFrom(magclass,write.report)
-importFrom(piamModelTests,iamCheck)
importFrom(quitte,read.quitte)
importFrom(remind2,compareScenarios2)
importFrom(rlang,.data)
26 changes: 2 additions & 24 deletions R/modeltests.R
@@ -12,7 +12,6 @@
#' @param model Model name
#' @param test Use this option to run a test of the workflow (no runs will be submitted)
#' The test parameter needs to be of the form "YY-MM-DD"
-#' @param iamccheck Use this option to turn iamc-style checks on and off
#' @param email whether an email notification will be sent or not
#' @param mattermostToken token used for mattermost notifications
#' @param compScen whether compScen has to run or not
@@ -22,7 +21,6 @@
#' @seealso \code{\link{package2readme}}
#' @importFrom utils read.csv2
#' @importFrom dplyr filter %>%
-#' @importFrom piamModelTests iamCheck
#' @importFrom quitte read.quitte
#' @importFrom lucode2 sendmail
#' @importFrom remind2 compareScenarios2
@@ -35,7 +33,6 @@ modeltests <- function(
model = NULL,
user = NULL,
test = NULL,
-iamccheck = TRUE,
email = TRUE,
compScen = TRUE,
mattermostToken = NULL,
@@ -57,7 +54,7 @@
message("Found 'end' in ", normalizePath("../.testsstatus"), "\nCalling 'evaluateRuns'")
withr::with_dir("output", {
evaluateRuns(model = model, mydir = mydir, gitPath = gitPath, compScen = compScen, email = email,
-mattermostToken = mattermostToken, gitdir = gitdir, iamccheck = iamccheck, user = user)
+mattermostToken = mattermostToken, gitdir = gitdir, user = user)
})
# make sure next call will start runs
message("Writing 'start' to ", normalizePath("../.testsstatus"))
@@ -178,7 +175,7 @@ startRuns <- function(test, model, mydir, gitPath, user) {
message("Function 'startRuns' finished.")
}

-evaluateRuns <- function(model, mydir, gitPath, compScen, email, mattermostToken, gitdir, iamccheck, user, test = NULL) {
+evaluateRuns <- function(model, mydir, gitPath, compScen, email, mattermostToken, gitdir, user, test = NULL) {
message("Current working directory ", normalizePath("."))
if (is.null(test)) test <- readRDS(paste0(mydir, "/test.rds"))
if (!test) {
@@ -391,25 +388,6 @@ evaluateRuns <- function(model, mydir, gitPath, compScen, email, mattermostToken
}
}

-if (iamccheck) {
-a <- NULL
-if (length(runsStarted) > 0) {
-mifs <- paste0(runsStarted, "/REMIND_generic_", sub("_20[0-9][0-9].*.$", "", runsStarted), ".mif")
-mifs <- mifs[file.exists(mifs)]
-try(a <- read.quitte(mifs))
-if (!is.null(a)) {
-out[["iamCheck"]] <- iamCheck(a, cfg = model)
-if (!test) {
-saveRDS(out[["iamCheck"]], file = paste0("iamccheck-", commitTested, ".rds"))
-} else {
-saveRDS(out[["iamCheck"]], file = paste0("iamccheck-", test, ".rds"))
-}
-}
-}
-write(paste0("The IAMC check of these runs can be found in /p/projects/remind/modeltests/remind/output/iamccheck-",
-commitTested, ".rds", "\n"), myfile, append = TRUE)
-}

if(length(runsStarted) < 1) errorList <- c(errorList, "No runs started")

if(is.null(errorList)) {
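For orientation, here is a minimal sketch of how `modeltests()` is called once the `iamccheck` argument is gone. The parameter names are taken from the hunks above; the argument values are placeholders, and any parameters hidden in the collapsed parts of the diff are omitted.

```r
library(modelstats)

# Minimal sketch, not taken from this commit: all argument values are placeholders.
# The former `iamccheck` switch is simply no longer part of the call.
modeltests(
  model           = "REMIND",   # hypothetical model name
  user            = "someuser", # hypothetical cluster user
  test            = NULL,       # or a "YY-MM-DD" string to run the workflow in test mode
  email           = TRUE,
  compScen        = TRUE,
  mattermostToken = NULL
)
```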
6 changes: 3 additions & 3 deletions README.md
@@ -1,6 +1,6 @@
# Run Analysis Tools

-R package **modelstats**, version **0.19.0**
+R package **modelstats**, version **0.19.1**

[![CRAN status](https://www.r-pkg.org/badges/version/modelstats)](https://cran.r-project.org/package=modelstats) [![R build status](https://github.com/pik-piam/modelstats/workflows/check/badge.svg)](https://github.com/pik-piam/modelstats/actions) [![codecov](https://codecov.io/gh/pik-piam/modelstats/branch/master/graph/badge.svg)](https://app.codecov.io/gh/pik-piam/modelstats) [![r-universe](https://pik-piam.r-universe.dev/badges/modelstats)](https://pik-piam.r-universe.dev/builds)

@@ -47,7 +47,7 @@ In case of questions / problems please contact Anastasis Giannousakis <giannou@p

To cite package **modelstats** in publications use:

-Giannousakis A, Richters O (2024). _modelstats: Run Analysis Tools_. R package version 0.19.0, <https://github.com/pik-piam/modelstats>.
+Giannousakis A, Richters O (2024). _modelstats: Run Analysis Tools_. R package version 0.19.1, <https://github.com/pik-piam/modelstats>.

A BibTeX entry for LaTeX users is

@@ -56,7 +56,7 @@ A BibTeX entry for LaTeX users is
title = {modelstats: Run Analysis Tools},
author = {Anastasis Giannousakis and Oliver Richters},
year = {2024},
-note = {R package version 0.19.0},
+note = {R package version 0.19.1},
url = {https://github.com/pik-piam/modelstats},
}
```
3 changes: 0 additions & 3 deletions man/modeltests.Rd

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions modelstats.Rproj
@@ -18,3 +18,4 @@ StripTrailingWhitespace: Yes
BuildType: Package
PackageUseDevtools: Yes
PackageInstallArgs: --no-multiarch --with-keep.source
+PackageRoxygenize: rd,collate,namespace
2 changes: 1 addition & 1 deletion vignettes/testingSuite.R
@@ -1,3 +1,3 @@
-## ----setup, include=FALSE-----------------------------------------------------
+## ----setup, include=FALSE-----------------------------------------------------------------------------------------------------------------------------------
knitr::opts_chunk$set(echo = TRUE)

12 changes: 1 addition & 11 deletions vignettes/testingSuite.Rmd
@@ -25,22 +25,12 @@ It is fully automated (runs every weekend), documents its findings in a gitlab p

## How it works:

-A scheduled and recurring job runs on the cluster every weekend, starting model runs according to a predefined set of scenarios. Once the runs are finished the job creates an overview of the results using [rs2](http://htmlpreview.github.io/?https://github.com/pik-piam/modelstats/blob/master/vignettes/rs2.html), and (optionally) additionally runs [compareScenarios](https://github.com/pik-piam/remind2/blob/master/R/compareScenarios.R) comparing each run with the previous run with the same scenario name (the comparison PDF files are found in each scenario folder, they also contain comparisons with historical data). It also uploads all runs to the [shinyResults::appResults](https://github.com/pik-piam/shinyresults/blob/master/R/appResults.R) app for easy and interactive viewing. Also here a comparison with historical data is found. Further, the results of all runs are gathered, passed through an [IAMC-style](https://github.com/IAMconsortium/iamc) check for consistency, and archived. These checks include:
-
-* non-standard variable names
-* missing variables
-* min and max values
-* summation groups (using the |+| notation)
-* units
-
-Finally, using a gitlab integration hook an overview of the results is automatically reported to the developer team via email.
+A scheduled and recurring job runs on the cluster every weekend, starting model runs according to a predefined set of scenarios. Once the runs are finished, the job creates an overview of the results using [rs2](http://htmlpreview.github.io/?https://github.com/pik-piam/modelstats/blob/master/vignettes/rs2.html) and (optionally) runs [compareScenarios](https://github.com/pik-piam/remind2/blob/master/R/compareScenarios.R), comparing each run with the previous run of the same scenario name (the comparison PDF files are found in each scenario folder and also contain comparisons with historical data). It also uploads all runs to the [shinyResults::appResults](https://github.com/pik-piam/shinyresults/blob/master/R/appResults.R) app for easy and interactive viewing, where a comparison with historical data is available as well. Finally, using a gitlab integration hook, an overview of the results is automatically reported to the developer team via email.

## How to use/contribute:

If the report contains runs that failed, or output that looks wrong (or has not been generated at all), someone has to take action. If there are cases you feel responsible for or able to fix, please do so. This early warning system ensures stable workflows for all members of the team.

If you want your own scenarios to be included in future tests: for REMIND, add your scenarios to the `scenario_config.csv`, add `AMT` to the `start` column, and simply commit the file to the model's develop branch (similar for MAgPIE). They will be automatically executed the next time the tests are run.
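To make this concrete, the sketch below shows one way to check which scenarios are currently flagged for the automated tests. It is an illustration only: the file path, the separator handled by `read.csv2`, and the column names `title` and `start` are assumptions based on the usual REMIND `scenario_config.csv` layout, not something defined by this package.

```r
# Illustrative sketch: path and column names ("title", "start") are assumptions.
cfg <- utils::read.csv2("config/scenario_config.csv", stringsAsFactors = FALSE)

# Scenarios carrying "AMT" in the start column are picked up by the weekend tests
amtScenarios <- cfg$title[grepl("AMT", cfg$start, fixed = TRUE)]
print(amtScenarios)
```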

-Tailor the IAMC-style checks to your needs or add further IAMC-style checks here (read the vignette to see how this is done): http://htmlpreview.github.io/?https://github.com/pik-piam/piamModelTests/blob/master/vignettes/iamc.html

For wishes, changes etc., please contact RSE or open a new issue here: https://github.com/pik-piam/modelstats/issues
362 changes: 285 additions & 77 deletions vignettes/testingSuite.html

Large diffs are not rendered by default.
