We need CI for these scripts #5
I could change the scripts so that they are classes, and change the argparse so that it is a function called by main and by the tests, as described here. There is a lot of code duplication across the different convert scripts, so perhaps they are ready to be combined a bit more.
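For what it's worth, a minimal sketch of that argparse-as-a-function pattern might look like the following. The option names and script structure here are illustrative, not taken from the existing scripts:

```python
# Sketch of the argparse-as-a-function pattern (names are illustrative).
import argparse


def make_parser():
    """Build the parser separately so tests can exercise it directly."""
    parser = argparse.ArgumentParser(
        description='Convert a nuclear data library to HDF5.')
    parser.add_argument('-d', '--destination', default='output',
                        help='output directory')
    parser.add_argument('-b', '--batch', action='store_true',
                        help='suppress interactive prompts')
    return parser


def main(argv=None):
    args = make_parser().parse_args(argv)
    # ... download / extract / convert using args ...
    return args


if __name__ == '__main__':
    main()
```

A unit test can then call `main(['-d', 'tmp', '-b'])` directly without touching `sys.argv`.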
Perhaps CircleCI with an OpenMC docker image could also be used to download and process the nuclear data. I am not sure how much can be covered within the time limit.
Perhaps the CI should test only the updated scripts, as we can't test all the scripts within the time limits.
I think the question to ask is what exactly we want to check/get from the CI testing.

Is it to test the whole end-to-end operation of the scripts? As Jonathan has already said, that's probably too time consuming for cloud-based CI, but you might be able to set something up with a restricted set of input files (a cut-down source library). There are then more questions about how to determine a pass/fail; I suppose we could use h5diff to compare generated HDF5 files against approved references (sketched after this comment).

Do we want to check that the conversion code continues to work? Again, as Jonathan has suggested, the scripts could be refactored to base them around classes which use shared methods, which could then be unit tested using a set of sample input files in different formats. We might be able to get similar benefits from checking the coverage of the from_* methods in the main openmc library, adding unit tests to check that all of the relevant load/convert methods are exercised, and closing any holes.

Or is it to check that the source download links are valid and that the download functionality still works? To keep an eye on the type of issue mentioned above, we could create a script or test which checks that the download files are present and possibly tries to download the first few bytes (a full download and checksum might be overkill; see the sketch below). There is already a set of download-handler unit tests in the main library, so these could be expanded as necessary.

Given that the readme of the repository suggests these scripts are only for users who want something different to the standard libraries which can be downloaded pre-built, personally I'd certainly suggest a serious look at refactoring the scripts, and I'd definitely look at covering any load/convert methods in the main library. I think that full end-to-end testing is a bit more than can be managed in the cloud, and I wouldn't worry about checking download links: ultimately you can't stop third parties changing locations or server configurations, although a check would give you an early warning if something were to change.
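To make those two lightweight checks concrete, here is a rough pytest sketch. The dataset contents, reference layout, and URL are placeholders, and it assumes h5py and the h5diff command-line tool are available:

```python
import subprocess
import urllib.request

import h5py


def test_h5diff_passes_on_matching_files(tmp_path):
    # Stand-ins for the "generated" and "approved reference" libraries;
    # a real test would produce the generated file by running a convert
    # script on a cut-down input set and keep the reference in the repo.
    generated = tmp_path / 'generated.h5'
    reference = tmp_path / 'reference.h5'
    for path in (generated, reference):
        with h5py.File(path, 'w') as f:
            f['cross_section'] = [1.0, 2.0, 3.0]
    # h5diff exits with a non-zero status when the files differ.
    result = subprocess.run(['h5diff', str(generated), str(reference)])
    assert result.returncode == 0


def test_download_link_is_alive():
    url = 'https://example.org/path/to/library.tar.gz'  # placeholder URL
    # Ask the server for only the first kilobyte instead of the whole archive.
    request = urllib.request.Request(url, headers={'Range': 'bytes=0-1023'})
    with urllib.request.urlopen(request, timeout=30) as response:
        assert response.status in (200, 206)  # 206 = range request honoured
```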
I'm personally not overly worried about testing of these scripts. Most of the "meat" is really in the conversion methods like …

That being said, I think the suggestions are good: refactoring these scripts so that they rely on some shared infrastructure is of course beneficial for a variety of reasons.
It seems like there are two different requirements here: 1) check that modifications to the scripts don't inadvertently break their functionality; and 2) check that changes in the upstream data sources don't break the scripts.

For the first requirement, the suggestion of refactoring the scripts to use shared code, coupled with a good set of unit tests, seems like the way forward.

For the second, the functionality of a script could get broken even if no changes are made, so we'd want periodic (e.g. nightly) CI rather than something initiated by committing code. But even if it were possible to download and process all of the data within the time limit, that doesn't sound like a very responsible use of resources to me. So I favour the idea of downloading and processing a small subset of each library, where that is possible. Of course, if you have to download a large archive file before you can start doing anything with it, there's not much you can do in the way of testing without doing the whole download, but you don't necessarily have to process it all. Perhaps we could have an optional argument to indicate a "CI" mode which processes a subset of the data, with a reference HDF5 output file containing the cut-down generated library (see the sketch after this comment).

It sounds like this isn't considered to be a major issue, so the pragmatic approach is probably to refactor with unit tests, use nightly CI to check that the upstream source files exist, and perhaps download a small number and do some reduced processing on them. And then just accept that the CI isn't going to catch every weird and wonderful way that the upstream data sources can change things to hurt you. I'm guessing they don't change things very often, so hammering on their servers every night just seems overkill.
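A minimal sketch of what such a "CI" mode could look like, assuming an argparse-based script; the flag name, nuclide subset, and helper are hypothetical, not existing options in these scripts:

```python
import argparse

# Hypothetical fixed subset processed when running under CI.
CI_NUCLIDES = {'H1', 'O16', 'Fe56', 'U235'}


def select_nuclides(available, ci_mode):
    """Restrict processing to the fixed subset when in CI mode."""
    return sorted(CI_NUCLIDES & set(available)) if ci_mode else list(available)


def parse_args(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument('--ci', action='store_true',
                        help='process a reduced subset of the library and '
                             'compare against a reference HDF5 file')
    return parser.parse_args(argv)


if __name__ == '__main__':
    args = parse_args()
    nuclides = select_nuclides(['H1', 'H2', 'O16', 'U235', 'U238'], args.ci)
    print(nuclides)
```

A nightly workflow could then run each convert script with the flag and h5diff the output against the stored cut-down reference.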
Came across this nice pytest plugin. It can be used to work out which tests need to be run, taking into account which files have changed: https://pypi.org/project/pytest-testmon/ I think this together with GitHub Actions could probably cover the convert scripts; the generate scripts require quite a lot more time to process. pytest-watch could also be mixed in.
Just wanted to mention that I have used GitHub Actions to download and test (use in a simulation) TENDL 2019: https://github.com/openmc-data-storage/TENDL-2019 I also accidentally caught a bug with the FENDL 3.1d script, as the URL was changed by the host.
I've written some CI that selectively runs the convert_*.py scripts if their contents change. This allows us to avoid downloading, extracting, and processing every library on every run, but still checks that a script works if it has been changed (a rough sketch of the change-detection idea is below). In addition, I've started a framework that allows testing of the download URL without actually downloading the files. This is implemented for FENDL and TENDL currently but could be rolled out to the others. More details are over here on this PR for this fork (which I'm keen to merge in once it is more finished and if it is acceptable).
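Something along these lines could do the change detection. This is a sketch under the assumptions that the scripts live at the repository root and accept a `--batch` flag, not a copy of the fork's actual CI:

```python
# Run only the convert scripts that differ from the base branch.
import subprocess
from pathlib import Path


def changed_convert_scripts(base='origin/main'):
    """Return the convert_*.py files that differ from the base branch."""
    out = subprocess.run(
        ['git', 'diff', '--name-only', base, 'HEAD'],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    return [Path(p) for p in out
            if Path(p).name.startswith('convert_') and p.endswith('.py')]


if __name__ == '__main__':
    for script in changed_convert_scripts():
        print(f'running {script}')
        subprocess.run(['python', str(script), '--batch'], check=True)
```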
There is the risk (as I discovered) of these scripts becoming stale - they should be under CI