
We need CI for these scripts #5

Open · makeclean opened this issue Jul 1, 2019 · 9 comments

@makeclean
Contributor

There is a risk (as I discovered) of these scripts becoming stale; they should be under CI.

@shimwell
Member

shimwell commented Aug 9, 2019

A few thoughts:

- I could change the scripts so that they are classes, and unit test all the class methods and properties.
- Change the argparse setup so that it is a function called by main, and test it as described here (see the sketch below).
- There is a lot of code duplication across the different convert scripts, so perhaps they are ready to be combined a bit more.
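A minimal sketch of that argparse-in-a-function pattern (the script structure and option names here are illustrative, not taken from the actual convert scripts):

```python
import argparse


def make_parser():
    """Build the parser separately so tests can exercise it without sys.argv."""
    parser = argparse.ArgumentParser(description='Convert a data library to HDF5.')
    parser.add_argument('-d', '--destination', default='hdf5',
                        help='directory to write the converted library to')
    parser.add_argument('-b', '--batch', action='store_true',
                        help='run without interactive prompts')
    return parser


def main(argv=None):
    args = make_parser().parse_args(argv)
    # ... download, convert and save the library using args ...
    return args


# A unit test can then drive the parser directly:
def test_destination_flag():
    args = make_parser().parse_args(['--destination', 'out'])
    assert args.destination == 'out'
```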

@shimwell
Member

shimwell commented Aug 9, 2019

Perhaps CircleCI with an OpenMC Docker image could also be used to download and process the nuclear data. I am not sure how much can be covered within the time limit.

@shimwell
Member

Perhaps the CI should test only the updated scripts, as we can't test all of the scripts within the time limits.

@tim-fry
Contributor

tim-fry commented Mar 5, 2020

I think the question to ask is what exactly we want to check/get from the CI testing.

Is it to test the whole end-to-end operation of the scripts?

As Jonathan has already said, that's probably too time-consuming for cloud-based CI, but you might be able to set something up with a restricted set of input files (a cut-down source library). That raises further questions about how to determine a pass/fail; I suppose we could use h5diff to compare generated HDF5 files against approved references (see the sketch below).
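A sketch of that h5diff comparison (the file names are placeholders; h5diff exits 0 when the files match, 1 when they differ, and higher on error):

```python
import subprocess
import sys


def hdf5_matches_reference(generated, reference):
    """Return True when h5diff reports no differences between the files."""
    result = subprocess.run(['h5diff', generated, reference],
                            capture_output=True, text=True)
    if result.returncode > 1:
        raise RuntimeError('h5diff failed: ' + result.stderr)
    return result.returncode == 0


if __name__ == '__main__':
    ok = hdf5_matches_reference('Fe56.h5', 'references/Fe56.h5')
    sys.exit(0 if ok else 1)
```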

Do we want to check that the conversion code continues to work?

Again, as Jonathan has suggested, the scripts could be refactored around classes with shared methods, which could then be unit tested using a set of sample input files in different formats. We might get similar benefits from checking the coverage of the from_* methods in the main openmc library and adding unit tests to close any holes in the relevant load/convert methods.

Or is it to check that the source download links are valid and that download functionality still works?

To keep an eye on the type of issue mentioned above, we could create a script or test which checks that the source files are still present and possibly tries to download the first few bytes (a full download and checksum might be overkill; see the sketch below). There is already a set of download handler unit tests in the main library, so these could be expanded as necessary.
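One possible form for that check, fetching only the first kilobyte via an HTTP Range request (the URL is a placeholder; some servers ignore Range and return the whole file, which still proves it exists):

```python
import requests

# Placeholder: each real script knows its own source URL.
SOURCE_URLS = ['https://example.org/data/some-library.tar.gz']


def url_is_live(url, timeout=30):
    """Confirm the file is reachable without downloading all of it."""
    response = requests.get(url, headers={'Range': 'bytes=0-1023'},
                            stream=True, timeout=timeout)
    # 206 means the server honoured the Range header; 200 means it
    # ignored the header but the file exists. Anything else is suspicious.
    return response.status_code in (200, 206)


for url in SOURCE_URLS:
    assert url_is_live(url), 'Download link appears broken: ' + url
```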

Given that the readme of the repository suggests these scripts are only for users who want something different from the standard pre-built libraries, personally I'd suggest a serious look at refactoring the scripts, and I'd definitely look at covering any load/convert methods in the main library. Full end-to-end testing is a bit more than can be managed in the cloud, and I wouldn't worry about checking download links: ultimately you can't stop third parties changing locations or server configurations, although such checks would give you an early warning if something did change.

@paulromano
Contributor

I'm personally not overly worried about testing of these scripts. Most of the "meat" is really in the conversion methods like IncidentNeutron.from_ace, IncidentPhoton.from_endf, etc. These scripts are really just collecting files from various sources, converting them, and then saving the output somewhere. Personally, I only run these scripts when I have a need to update one of the existing libraries (e.g., when the HDF5 format in OpenMC changes) or if a new ENDF library comes out, so to my knowledge these are not getting run all the time. Most users of OpenMC should be entirely fine just downloading the pre-generated libraries that we distribute on openmc.org -- for those who do have specific needs, maybe a particular set of temperatures that they really want data at, these scripts give them a nice blueprint for what they might need to do.
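For reference, the conversion core being described looks roughly like this (these are real openmc.data methods, but the file paths are invented for illustration):

```python
import openmc.data

# Neutron data: read an ACE table and write OpenMC's HDF5 format.
u235 = openmc.data.IncidentNeutron.from_ace('ace/92235.710nc')
u235.export_to_hdf5('U235.h5')

# Photon data: built from ENDF photoatomic and atomic relaxation files.
photon = openmc.data.IncidentPhoton.from_endf('photoat-092_U_000.endf',
                                              'atom-092_U_000.endf')
photon.export_to_hdf5('U.h5')
```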

That being said, I think the suggestions are good -- refactoring these scripts so that they rely on some shared infrastructure is of course beneficial for a variety of reasons.

@simondrichards
Contributor

simondrichards commented Mar 13, 2020

It seems like there are two different requirements here: 1) check that modifications to scripts don't inadvertently break their functionality; and 2) check that changes in the upstream data sources don't break the scripts.

For the first requirement, the suggestion of refactoring the scripts to use shared code, coupled with a good set of unit tests, seems like the way forward.

For the second, the functionality of a script could break even if no changes are made, so we'd want periodic (e.g. nightly) CI rather than something triggered by committing code. But even if it were possible to download and process all of the data within the time limit, that doesn't sound like a very responsible use of resources to me. So I favour the idea of downloading and processing a small subset of each library, where that is possible. Of course, if you have to download a large archive before you can start doing anything with it, there's not much you can do in the way of testing without doing the whole download, but you don't necessarily have to process it all. Perhaps we could have an optional argument to indicate a "CI" mode which processes a subset of the data, with a reference HDF5 output file containing the cut-down generated library (see the sketch below).
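A sketch of what such a CI mode might look like (the flag name, subset size and helper function are invented for illustration):

```python
import argparse


def download_and_extract():
    """Placeholder for the real download/extract step; returns ENDF file names."""
    return ['n-{:03d}.endf'.format(z) for z in range(1, 101)]


def main(argv=None):
    parser = argparse.ArgumentParser(description='Download and convert a library.')
    parser.add_argument('--ci-mode', action='store_true',
                        help='process only a small subset of the library so '
                             'the run fits within hosted CI time limits')
    args = parser.parse_args(argv)

    endf_files = sorted(download_and_extract())
    if args.ci_mode:
        # Keep a handful of nuclides; CI can then compare the generated
        # HDF5 output against a small checked-in reference library.
        endf_files = endf_files[:5]
    return endf_files


if __name__ == '__main__':
    print(main())
```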

It sounds like this isn't considered a major issue, so the pragmatic approach is probably to refactor with unit tests, use nightly CI to check that the upstream source files exist, and perhaps download a small number and do some reduced processing on them. And then just accept that the CI isn't going to catch every weird and wonderful way that the upstream data sources can change things to hurt you.

I'm guessing they don't change things very often, so hammering on their servers every night just seems overkill.

@shimwell
Member

Came across this nice pytest plugin. It can be used to work out which tests need to be run, taking into account which files have changed:

https://pypi.org/project/pytest-testmon/

I think this, together with GitHub Actions, could probably cover the convert scripts. The generate scripts require quite a lot more time to process.

pytest-watch could also be mixed in.

@shimwell
Member

shimwell commented Apr 5, 2021

Just wanted to mention that I have used GitHub Actions to download and test (i.e. use in a simulation) the following libraries:

TENDL 2019: https://github.com/openmc-data-storage/TENDL-2019
ENDF/B-VII.1: https://github.com/openmc-data-storage/ENDF-B-VII.1-NNDC
FENDL 3.1d: https://github.com/openmc-data-storage/FENDL-3.1d
JEFF 3.2: https://github.com/openmc-data-storage/JEFF-3.2

In the process I also caught a bug in the FENDL 3.1d script, as the URL had been changed by the host.

@shimwell
Member

I've written some CI that selectively runs convert_*.py scripts when their contents change.

This allows us to avoid downloading, extracting and processing everything on every run, while still checking that a script works whenever it is modified (see the sketch below).
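A minimal version of that selection step, assuming the job diffs against a default branch named main (the branch name and the idea of invoking the scripts directly are assumptions):

```python
import subprocess

# Ask git which files changed relative to the default branch.
diff = subprocess.run(['git', 'diff', '--name-only', 'origin/main...HEAD'],
                      capture_output=True, text=True, check=True)
changed = [path for path in diff.stdout.splitlines()
           if path.startswith('convert_') and path.endswith('.py')]

# Re-run only the convert scripts whose contents actually changed.
for script in changed:
    print('Re-running', script)
    subprocess.run(['python', script], check=True)
```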

In addition to this, I've started a framework that tests the download URL without actually performing the full download. This is implemented for FENDL and TENDL currently, but could be rolled out to the others.

More details are over on the PR for this fork (which I'm keen to merge in once it is more finished, if it is acceptable).

shimwell linked a pull request on Jul 1, 2022 that will close this issue.