
Add OptimizationOptimJL.Optim.BFGS() tests for issues 747, 754 #758

Closed
wants to merge 1 commit

Conversation

sjdaines (Contributor)

Failing test for #754
Passing test for #747

Partially addresses #755

NB: unrelated problem: tests using Optimization.AutoModelingToolkit() were failing, so code such as
    @test (sol = solve(prob, Optim.Newton())) isa Any  # test exception not thrown
was added (and similar) so the exception is caught and the other tests still run.
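As a note on why this pattern keeps the test file alive, a minimal self-contained sketch (the error() call is a deliberate stand-in for the AutoModelingToolkit failure, not the real failing code):

    using Test

    @testset "exception inside @test is contained" begin
        # When the expression inside @test throws, the Test stdlib records an
        # Error result for that test instead of letting the exception propagate,
        # so the remaining tests in the set still execute.
        @test (sol = error("stand-in for the AutoModelingToolkit failure")) isa Any
        @test 1 + 1 == 2   # still runs
    end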

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes was updated
  • The new code follows the
    contributor guidelines, in particular the SciML Style Guide and
    COLPRAC.
  • Any new documentation only uses public API

Additional context


SciML#754
SciML#747

partially addressing SciML#755

ChrisRackauckas (Member)

Failing test for #754

I don't see an @test_broken in the PR?
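For reference, a minimal sketch of the @test_broken mechanism (the condition here is a placeholder, not the actual assertion that fails for #754):

    using Test

    @testset "OptimizationOptimJL known-failing case" begin
        # @test_broken records a Broken result when the condition is false (or the
        # expression throws) and reports an unexpected pass as an error, so the
        # marker gets noticed once the underlying issue is fixed.
        @test_broken false   # placeholder for the assertion that currently fails
    end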


codecov bot commented May 21, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 67.46%. Comparing base (e2cad3d) to head (764c066).
Report is 78 commits behind head on master.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #758      +/-   ##
==========================================
+ Coverage   63.85%   67.46%   +3.61%     
==========================================
  Files          22       22              
  Lines        1527     1632     +105     
==========================================
+ Hits          975     1101     +126     
+ Misses        552      531      -21     


sjdaines (Contributor, Author)

sjdaines commented May 21, 2024

Converted to draft PR, as I think there are a couple of problems here with how the tests are organized, which are defeating attempts to maintain the tests (including marking tests with @test_broken):

  1. Errors due to AutoModelingToolkit (every case is erroring).
  2. The part that fails is often outside @test, which then causes the whole test file to error, e.g.
     optprob = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit())
     prob = OptimizationProblem(optprob, x0, _p)
     sol = solve(prob, Optim.BFGS())
     @test 10 * sol.objective < l1
     Here the error is generated by the solve call on L161; it propagates to the top level, so the rest of the tests do not run.

Is there an example of best practice for how tests like this should be structured, so that tests are independent, i.e. a failure in one test is contained and the rest still run? Use @testset? Is there a convenient way to achieve @test_nothrow?
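For concreteness, one possible structure, sketched with the Rosenbrock setup used in the existing tests (the names rosenbrock, x0, _p, l1 and the loaded AD backends are assumptions about that file): each solver/AD combination gets its own @testset, so an exception thrown outside any @test is caught and reported by that testset while the sibling testsets still run.

    using Optimization, OptimizationOptimJL, Optim, Test
    # plus whatever AD backend packages the test file already loads
    # (e.g. ModelingToolkit for AutoModelingToolkit)

    rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
    x0 = zeros(2)
    _p = [1.0, 100.0]
    l1 = rosenbrock(x0, _p)

    @testset "AutoModelingToolkit + BFGS" begin
        # If solve throws here, only this @testset is recorded as errored;
        # execution continues with the next @testset instead of aborting the file.
        optprob = OptimizationFunction(rosenbrock, Optimization.AutoModelingToolkit())
        prob = OptimizationProblem(optprob, x0, _p)
        sol = solve(prob, Optim.BFGS())
        @test 10 * sol.objective < l1
    end

    @testset "AutoForwardDiff + BFGS" begin
        optprob = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
        prob = OptimizationProblem(optprob, x0, _p)
        sol = solve(prob, Optim.BFGS())
        @test 10 * sol.objective < l1
    end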

Looks like the net result of 1. and 2. combined is that the tests for OptimizationOptimJL have been failing for the last few releases.

sjdaines (Contributor, Author)

Closing this, as restructuring the tests needs to be looked at more systematically, and new tests really belong in the PRs with the fixes, e.g. #759 (I've added a comment there linking to the suggested new tests from this PR).

sjdaines closed this May 22, 2024