Remove some fv3fit test cases, mark others as slow #2109
Conversation
Do not run all combinations of use_local_download_path and use_validation_data for all model types. These options should be independent of the model type (I think), so it seems excessive to test all cases for all model types. The test is now run 4 times instead of 12, which saves about 2 minutes.
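A minimal sketch of what such a pared-down parametrization could look like. The option names come from the comment above; the model-type list (taken from the diff snippet in this PR) and the test body are assumptions, not the actual fv3fit code:

```python
import pytest

# Model types visible in the PR diff; the full list in fv3fit may differ.
MODEL_TYPES = ["convolutional", "precipitative", "dense"]

# One case per model type with a single shared setting for the
# data-loading options, instead of the full cross product.
CASES = [
    pytest.param(model_type, False, False, id=model_type)
    for model_type in MODEL_TYPES
]


@pytest.mark.parametrize(
    "model_type, use_local_download_path, use_validation_data", CASES
)
def test_cli(model_type, use_local_download_path, use_validation_data):
    # Placeholder body: the real test trains a model through the CLI.
    assert model_type in MODEL_TYPES
```

With this layout the suite runs once per model type rather than once per (model type, option combination) pair.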
```diff
@@ -11,6 +11,7 @@ altair==4.1.0
 anyio==2.2.0
 apache-beam==2.37.0
 appdirs==1.4.4
+appnope==0.1.3
```
BSD 2-clause
Previously these tests took 5-15s each. Now they are all under 0.5s and no longer need to be marked "slow".
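For reference, a minimal sketch of the pytest "slow" marker pattern being discussed. The marker name comes from the comment; the registration snippet is an assumption about how the project's pytest config might look:

```python
import pytest


@pytest.mark.slow
def test_expensive_training():
    # A long-running case; deselect it with `pytest -m "not slow"`.
    pass


def test_fast_case():
    # Unmarked tests run in every invocation.
    pass


# Custom markers are typically registered in pytest.ini or setup.cfg:
# [pytest]
# markers =
#     slow: marks tests as slow to run
```

Once a test drops under a negligible runtime, removing the marker keeps `-m "not slow"` runs meaningful.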
Thanks for the cleanups, mostly LGTM! I just had one request: keep the model_type options with the same control config in test_cli, and keep a few of the deleted combinations. I don't need to re-review that.
```diff
@@ -211,10 +211,6 @@ update_submodules:
 	external/fv3gfs-fortran \

-overwrite_baseline_images:
```
Was this just left over from something that was deleted a while ago?
Yeah, didn't we briefly have matplotlib image tests at some point?
```python
pytest.param("convolutional", {}, ["dQ1", "dQ2"], id="convolutional"),
pytest.param("precipitative", {}, ["dQ1", "dQ2"], id="precipitative"),
pytest.param("dense", {}, ["dQ1", "dQ2"], id="dense"),
pytest.param(
```
Could you keep the set of 4 model types with the same settings for use_local_download_path and use_validation_data, and pick 1 of the models to get the 3 additional combinations of those args, with appropriate ids? That still cuts down on most of the unnecessary tests, but if something breaks it'll at least be clearer whether it was the model or the data loading.
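A hedged sketch of the layout the reviewer seems to be asking for. Which model gets the extra combinations, the boolean defaults, and the ids are all assumptions; only three of the four model types are visible in the diff snippet, so the fourth is omitted here:

```python
import pytest

CASES = [
    # All model types with the same control config.
    pytest.param("convolutional", False, False, id="convolutional"),
    pytest.param("precipitative", False, False, id="precipitative"),
    pytest.param("dense", False, False, id="dense"),
    # One model also covers the remaining option combinations, so a
    # failure points at the data loading rather than the model itself.
    pytest.param("dense", True, False, id="dense-local-download"),
    pytest.param("dense", False, True, id="dense-validation-data"),
    pytest.param("dense", True, True, id="dense-local-download-validation"),
]


@pytest.mark.parametrize(
    "model_type, use_local_download_path, use_validation_data", CASES
)
def test_cli(model_type, use_local_download_path, use_validation_data):
    pass  # placeholder for the real CLI training test
```

This keeps the count well below the full cross product while still exercising every option combination on at least one model.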
The fv3fit unit tests are slow (roughly 10 minutes). This PR makes a small attempt to ease the test burden.
Changes:

- Reduced the parameter combinations run in the test_cli test (saved 2 minutes on my Mac)
- Removed leftover pytest-mpl machinery

Requirement changes:

- Removed pytest-mpl from pip-requirements.txt
- Ran make lock_deps/lock_pip following these instructions to regenerate constraints.txt (example)
Improves the situation described in #2057.