
Remove some fv3fit test cases, mark others as slow #2109

Merged: 6 commits merged into master from speed-fv3fit-tests on Nov 9, 2022

Conversation

oliverwm1 (Contributor) commented Nov 8, 2022

The fv3fit unit tests are slow (roughly 10 minutes). This PR makes a small attempt to ease the test burden.

  • Removes some parameter combinations from the test_cli test (saved 2 minutes on my Mac)
  • Uses smaller dataset sizes for the novelty detector tests (about a 20x speedup for the module, saving 30-40s)
  • Marks a couple of slow (cyclegan/fmr) tests as slow (see the marker sketch after this list)
  • Deletes an unused requirement (pytest-mpl)
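
As a concrete illustration of the "slow" marking, here is a minimal sketch of the pytest marker pattern; the actual fv3fit marker registration and test names may differ:

import pytest

# conftest.py (sketch): register the custom marker so that
# `pytest -m "not slow"` can deselect long-running tests without warnings.
def pytest_configure(config):
    config.addinivalue_line("markers", "slow: long-running tests")

# test module (sketch): hypothetical test name standing in for the
# real cyclegan/fmr tests.
@pytest.mark.slow
def test_cyclegan_end_to_end():
    ...

With this in place, the fast suite can run `pytest -m "not slow"` while a full CI job drops the filter.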

Requirement changes:

  • Removed pytest-mpl from pip-requirements.txt
  • Ran make lock_deps/lock_pip following these instructions
  • Added PR review with license info for any additions to constraints.txt (example)

Improves the situation described in #2057.

Coverage reports (updated automatically):

  • test_unit: 82%

Do not run all combinations of use_local_download_path and use_validation_data for all model types. These options should be independent of the model type (I think), so it seems excessive to test all cases for every model type. The test is now run 4 times instead of 12, which saves about 2 minutes.
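
A hedged sketch of the reduced parametrization described above (the flag names come from the commit message; the model-type list and control values are assumptions, not the actual fv3fit test code):

import pytest

# Each model type now runs once with a fixed "control" setting of the
# two flags, instead of repeating the flag combinations per model type
# (12 cases down to 4, per the commit message above).
MODEL_TYPES = ["convolutional", "precipitative", "dense", "other"]  # "other" is a placeholder

@pytest.mark.parametrize("model_type", MODEL_TYPES)
def test_cli(model_type):
    use_local_download_path = False  # assumed control value
    use_validation_data = True  # assumed control value
    # ... a real test would invoke the training CLI here ...
    assert model_type in MODEL_TYPES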
@@ -11,6 +11,7 @@ altair==4.1.0
 anyio==2.2.0
 apache-beam==2.37.0
 appdirs==1.4.4
+appnope==0.1.3
oliverwm1 (Contributor Author) commented on the appnope addition:

BSD 2-clause

Previously these tests took 5-15 s each. Now they are all under 0.5 s and no longer need to be marked "slow".
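
A minimal sketch of the speedup pattern (the helper name, dimension names, and sizes are assumptions, not fv3fit's actual test fixtures):

import numpy as np

# Shrinking the synthetic training data is what turns a 5-15 s fit
# into a sub-0.5 s one; the detector under test is unchanged.
def tiny_training_arrays(n_samples=8, n_features=4):
    rng = np.random.default_rng(0)
    return rng.normal(size=(n_samples, n_features))

def test_novelty_detector_fits_quickly():
    X = tiny_training_arrays()
    # A real test would fit and score the novelty detector here.
    assert X.shape == (8, 4)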
AnnaKwa (Contributor) left a comment:


Thanks for the cleanups, mostly LGTM! I just had one request: keep the model_type options with the same control config in test_cli, and keep a few of the deleted combinations. I don't need to re-review for that.

@@ -211,10 +211,6 @@ update_submodules:
external/fv3gfs-fortran \


-overwrite_baseline_images:
AnnaKwa (Contributor):

Was this just left over from something that was deleted a while ago?

oliverwm1 (Contributor Author):

Yeah, didn't we briefly have matplotlib image tests at some point?

pytest.param("convolutional", {}, ["dQ1", "dQ2"], id="convolutional"),
pytest.param("precipitative", {}, ["dQ1", "dQ2"], id="precipitative"),
pytest.param("dense", {}, ["dQ1", "dQ2"], id="dense"),
pytest.param(
AnnaKwa (Contributor):

Could you keep the set of four model types with the same settings for use_local_download_path and use_validation_data, and pick one of the models to also get the three additional combinations of those args (with appropriate ids)? That still cuts most of the unnecessary tests, but if something breaks it will be clearer whether it was the model or the data loading.
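
A sketch of what that suggested parametrization could look like (the model chosen for the extra combinations, the option names, and the ids are all assumptions; the real test signature also carries hyperparameters and output variables, as in the pytest.param lines above):

import pytest

# Every model type keeps the control combination; one model ("dense"
# here, picked arbitrarily) also exercises the other three flag combos.
CONTROL = dict(use_local_download_path=False, use_validation_data=True)

CASES = [
    pytest.param("convolutional", CONTROL, id="convolutional"),
    pytest.param("precipitative", CONTROL, id="precipitative"),
    pytest.param("dense", CONTROL, id="dense"),
    # fourth model type elided, as in the diff above
    pytest.param(
        "dense",
        dict(use_local_download_path=True, use_validation_data=True),
        id="dense-local-download",
    ),
    pytest.param(
        "dense",
        dict(use_local_download_path=False, use_validation_data=False),
        id="dense-no-validation",
    ),
    pytest.param(
        "dense",
        dict(use_local_download_path=True, use_validation_data=False),
        id="dense-local-download-no-validation",
    ),
]

@pytest.mark.parametrize("model_type, options", CASES)
def test_cli(model_type, options):
    ...  # a real test would invoke the training CLI with these options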

oliverwm1 merged commit 4718140 into master on Nov 9, 2022.
oliverwm1 deleted the speed-fv3fit-tests branch on November 9, 2022 at 19:07.