We have many tests now, which are run on every PR.
While this is good for staying consistent, it causes a lot of unnecessary testing: when a small correction is applied to the documentation, we still check that all of our models are downloadable etc.
Probably there is a way to separate our test suites (like we already do for the snowglobes tests, by assigning `pytest.mark` in the test files) into separate sets, which would be run on a PR only when it has a certain label.
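For instance, marking the tests could look roughly like this (a sketch only; the mark names below are placeholders, not necessarily what we would end up using):

```python
# Sketch only: grouping tests into suites via pytest marks.
# Mark names (modelregistry, snowglobes) are placeholders.
import pytest

@pytest.mark.modelregistry
def test_models_can_be_downloaded():
    """Would run only when the 'modelregistry' suite is selected."""
    ...

@pytest.mark.snowglobes
def test_rate_calculation():
    """Would run only when the 'snowglobes' suite is selected."""
    ...
```

A PR label could then map to an invocation like `pytest -m modelregistry`, while the default job runs `pytest -m "not (modelregistry or snowglobes)"`.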
Something like this (how the marks could be registered is sketched after the list):
* default tests for all PRs (some basic checks without many computations)
* `ModelRegistry` - test that models can be downloaded
* `SNOwGLoBES` - test rate calculation for some of the models (no need to download or use all of them)
* `SupernovaModel` - test that all models can be instantiated etc.
* documentation - try to build the documentation and probably provide it as an artifact of the PR
* etc.
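One way to declare these suites (again only a sketch; the suite names are placeholders) is to register the marks once in `conftest.py`, so that running pytest with `--strict-markers` catches typos in mark names:

```python
# conftest.py - sketch only; suite names mirror the list above.
def pytest_configure(config):
    for mark, description in [
        ("modelregistry", "tests that download models"),
        ("snowglobes", "rate calculations using SNOwGLoBES"),
        ("models", "tests that instantiate SupernovaModel subclasses"),
    ]:
        config.addinivalue_line("markers", f"{mark}: {description}")
```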
Also, there is probably a way to define the test suites based on the files modified in the PR.
This can reduce the usage of our runners and address #238
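As a rough illustration of the files-based selection (the path-to-suite mapping and the helper script are hypothetical, just to show the idea), a small script could translate the PR diff into a `pytest -m` expression:

```python
# Sketch only: pick pytest suites from the files changed in the PR.
# The path prefixes and suite names are illustrative, not our real layout.
import subprocess

# Map path prefixes to the mark expression that should run when they change.
SUITES = {
    "python/snewpy/models": "modelregistry or models",
    "python/snewpy/snowglobes": "snowglobes",
    "doc/": "documentation",
}

def changed_files(base: str = "origin/main") -> list[str]:
    """Files modified relative to the PR base branch."""
    result = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

def mark_expression() -> str:
    """Combine the marks of all affected suites into one -m expression."""
    marks = {expr for path in changed_files()
             for prefix, expr in SUITES.items() if path.startswith(prefix)}
    return " or ".join(sorted(marks)) or "basic"

if __name__ == "__main__":
    # e.g. in CI: pytest -m "$(python select_suites.py)"
    print(mark_expression())
```

If nothing in the diff matches, it falls back to a hypothetical `basic` mark for the default checks.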
Maybe you can add more ideas