
IPOPT now returns Lagrange multipliers using the same sign convention as SNOPT. #416

Open

wants to merge 3 commits into base: main
Conversation

robfalck
Contributor

@robfalck commented Nov 13, 2024

Purpose

Added Lagrange multipliers to IPOPT's output. Note that in the call to _createSolution we negate the multiplier values so that they are consistent with those returned by SNOPT. My understanding is that this is a matter of convention (due to the internal representations of the problems in the two optimizers).

The Lagrange multipliers in the string representation of the solution are now correct.

Resolves #415
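The sign flip described above can be sketched as follows. This is a hypothetical standalone helper, not the actual _createSolution code; the dict-of-lists layout for the multipliers is an illustrative assumption, not pyOptSparse's API.

```python
# Hypothetical sketch: negate IPOPT's constraint multipliers before storing
# them, so they match SNOPT's sign convention.
# The dict-of-lists layout is an illustrative assumption, not pyOptSparse's API.
def to_snopt_convention(ipopt_multipliers):
    """Negate every multiplier value, constraint by constraint."""
    return {con: [-lam for lam in lams] for con, lams in ipopt_multipliers.items()}


lambda_ipopt = {"con1": [0.25], "con2": [-1.5, 3.0]}
print(to_snopt_convention(lambda_ipopt))  # {'con1': [-0.25], 'con2': [1.5, -3.0]}
```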

Expected time until merged

This change is not urgent.

Type of change

  • Bugfix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (non-backwards-compatible fix or feature)
  • Code style update (formatting, renaming)
  • Refactoring (no functional changes, no API changes)
  • Documentation update
  • Maintenance update
  • Other (please describe)

Testing

I've added a test to hs071 that verifies that the Lagrange multipliers returned by SNOPT and IPOPT are correct.
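A sketch of the kind of cross-optimizer agreement check such a test might perform. The helper and the numeric values are illustrative assumptions, not the actual hs071 test code or its reference multipliers.

```python
import math


def multipliers_match(lams_a, lams_b, tol=1e-6):
    """True if two multiplier dicts agree key-by-key within an absolute tolerance."""
    if lams_a.keys() != lams_b.keys():
        return False
    return all(
        len(lams_a[k]) == len(lams_b[k])
        and all(math.isclose(a, b, abs_tol=tol) for a, b in zip(lams_a[k], lams_b[k]))
        for k in lams_a
    )


# Illustrative values only, not the actual hs071 multipliers.
snopt_lams = {"con": [0.552, 0.161]}
ipopt_lams = {"con": [0.552, 0.161]}
print(multipliers_match(snopt_lams, ipopt_lams))  # True
```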

Checklist

  • I have run flake8 and black to make sure the Python code adheres to PEP-8 and is consistently formatted
  • I have formatted the Fortran code with fprettify or C/C++ code with clang-format as applicable
  • I have run unit and regression tests which pass locally with my changes
  • I have added new tests that prove my fix is effective or that my feature works
  • I have added necessary documentation

@robfalck requested a review from a team as a code owner on November 13, 2024 22:43
@marcomangano
Contributor

marcomangano commented Nov 13, 2024

Thanks for opening the PR! I also got curious and worked out a logic fix, but you were quicker than me - see this comparison. I don't have strong opinions between that approach and just adding a flag as you did; I actually like yours a bit more because it is more explicit. @lamkina do you have any suggestions here?

It looks like the tests are failing because ParOpt also returns multipliers, so you have to add it to the new test. (EDIT: you were quicker again!) I believe you are using a different setting for black; we allow 120-character lines, hence the formatting check fails.

As you see from my branch, I also figured that the IPOPT convention is flipped and changed the sign of the multipliers, so I am 100% onboard with what you did.


codecov bot commented Nov 13, 2024

Codecov Report

Attention: Patch coverage is 80.00000% with 1 line in your changes missing coverage. Please review.

Project coverage is 63.30%. Comparing base (f0db0af) to head (4cb9a90).

Files with missing lines            Patch %   Lines missing
pyoptsparse/pyOpt_optimization.py   66.66%    1 ⚠️

❗ There is a different number of reports uploaded between BASE (f0db0af) and HEAD (4cb9a90). Click for more details.

HEAD has 1 upload less than BASE
Flag     BASE (f0db0af)    HEAD (4cb9a90)
         4                 3
Additional details and impacted files
@@             Coverage Diff             @@
##             main     #416       +/-   ##
===========================================
- Coverage   74.92%   63.30%   -11.63%     
===========================================
  Files          22       22               
  Lines        3338     3338               
===========================================
- Hits         2501     2113      -388     
- Misses        837     1225      +388     

@robfalck
Contributor Author

I'm not sure where the regression in coverage is coming from. As far as I can tell, the tests hit all of the lines I added. It might be due to black reformatting a few things.

@robfalck
Contributor Author

You're right that my initial use of black was a little too aggressive (no 120-character line-length specification). I dialed that back so it only affected the files that I needed to change.

I'm still failing the codecov check and would appreciate guidance on what you'd like me to do.

@marcomangano
Contributor

Sorry, I just noticed your issues with Codecov - you actually don't need to do anything there! Since the PR comes from a fork and not from mdolab, the tests are run on the publicly available Docker images, which do not include SNOPT and NLPQLP, hence the reduction in coverage. Not passing the coverage check does not block the merge.

Contributor

@marcomangano marcomangano left a comment


I verified that the missing tests pass locally; this looks good to me, thanks!

@ewu63
Collaborator

ewu63 commented Nov 19, 2024

This is a bit of a philosophical question, but do we have references showing that IPOPT uses a different sign convention from SNOPT? I vaguely recall SNOPT itself using the opposite of the normal sign convention, but it's been a while.

If we are modifying values that come out of the optimizers, this needs to be clearly documented. I'm not entirely against it, but it can easily confuse downstream users and create a mess. If we want to ensure that all Lagrange multipliers are consistent across optimizers, then I think we need the following:

  1. A pyOptSparse definition of Lagrange multipliers, with the sign convention that we will use
  2. A note in the documentation page for each optimizer where we flip the sign of the multipliers output by the optimizer
  3. A clear note/warning explaining this in the docstring of the lambdaStar field
  4. A minor version bump
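For context on why the two conventions produce multipliers of opposite sign, here is a minimal worked example with an equality-constrained quadratic. This is my own illustration, not taken from either optimizer's documentation.

```python
# min f(x) = x1^2 + x2^2  subject to  c(x) = x1 + x2 - 1 = 0.
# The optimum is x* = (0.5, 0.5), where grad f = (1, 1) and grad c = (1, 1).
grad_f = (1.0, 1.0)
grad_c = (1.0, 1.0)

# Convention A: L = f - lambda*c  =>  stationarity grad f = lambda * grad c
lam_a = grad_f[0] / grad_c[0]   # +1.0

# Convention B: L = f + lambda*c  =>  stationarity grad f = -lambda * grad c
lam_b = -grad_f[0] / grad_c[0]  # -1.0

# Same magnitude, opposite sign: the kind of discrepancy discussed above.
print(lam_a, lam_b)
```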

Development

Successfully merging this pull request may close these issues.

Lagrange multiplier printout in summary output is broken.
3 participants