
Greybox Objectives #3364

Open · wants to merge 7 commits into main

Conversation

michaelbynum (Contributor)

Summary/Motivation:

This PR adds support for objectives in PyNumero grey box models.
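Roughly, this lets a derived grey box model supply an objective value directly. A minimal sketch of what that might look like (illustrative only, not code from this PR; the grey-box-side `evaluate_grad_objective` hook is an assumption, and the constraint-related hooks are included only to satisfy the base interface):

```python
# Illustrative sketch only (not code from this PR): a grey box model with two
# inputs and no constraints/outputs that supplies its own objective value.
# The `evaluate_grad_objective` hook name is assumed here; only
# `evaluate_objective` appears verbatim in the diff below.
import numpy as np
from scipy.sparse import coo_matrix
from pyomo.contrib.pynumero.interfaces.external_grey_box import ExternalGreyBoxModel


class QuadraticGreyBox(ExternalGreyBoxModel):
    def __init__(self):
        self._x = np.zeros(2)

    def input_names(self):
        return ['x0', 'x1']

    def equality_constraint_names(self):
        return []

    def output_names(self):
        return []

    def set_input_values(self, input_values):
        self._x[:] = input_values

    def evaluate_equality_constraints(self):
        # no equality constraints in this toy model
        return np.zeros(0)

    def evaluate_jacobian_equality_constraints(self):
        # empty Jacobian (0 constraints x 2 inputs)
        return coo_matrix((0, 2))

    def evaluate_objective(self):
        # f(x) = (x0 - 1)^2 + (x1 + 2)^2
        x0, x1 = self._x
        return (x0 - 1.0) ** 2 + (x1 + 2.0) ** 2

    def evaluate_grad_objective(self):
        # assumed companion hook returning the dense gradient of f
        x0, x1 = self._x
        return np.asarray([2.0 * (x0 - 1.0), 2.0 * (x1 + 2.0)])
```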

Legal Acknowledgement

By contributing to this software project, I have read the contribution guide and agree to the following terms and conditions for my contribution:

  1. I agree my contributions are submitted under the BSD license.
  2. I represent I am authorized to make the contributions and grant the license. If my employer has rights to intellectual property that includes these contributions, I represent that I have received permission to make contributions and grant the required license on behalf of that employer.

@Robbybp (Contributor) left a comment

I haven't reviewed changes in _evaluate_hessian_if_necessary_and_cache yet, but this looks fine.

Just curious, what is the motivation for doing this instead of just using outputs and putting them in the objective? Is there some motivating use case where this works better?
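(For reference, the alternative pattern being referred to, where the grey box exposes the quantity as an output and a plain Pyomo Objective minimizes it, might look roughly like the sketch below; `MyGreyBoxWithCostOutput` is a hypothetical model assumed to declare a 'cost' output.)

```python
# Illustrative only: the pre-existing pattern of routing a grey box value
# through an output variable and minimizing that variable in Pyomo.
# `MyGreyBoxWithCostOutput` is hypothetical and assumed to list 'cost'
# in its output_names().
import pyomo.environ as pyo
from pyomo.contrib.pynumero.interfaces.external_grey_box import ExternalGreyBoxBlock

m = pyo.ConcreteModel()
m.egb = ExternalGreyBoxBlock(external_model=MyGreyBoxWithCostOutput())
m.obj = pyo.Objective(expr=m.egb.outputs['cost'])
```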

```python
            input_values
        """
        raise NotImplementedError(
            'evaluate_objective called but not ' 'implemented in the derived class.'
```
Contributor:

Can you fix black's weird auto-formatting here?

Comment on lines -367 to +382

```diff
-        return self._pyomo_nlp.evaluate_objective()
+        obj = 0
+        for nlp in self._nlps:
+            obj += nlp.evaluate_objective()
+        return obj
 
     # overloaded from NLP
     def evaluate_grad_objective(self, out=None):
-        return self._pyomo_nlp.evaluate_grad_objective(out=out)
+        ret = np.zeros(self.n_primals(), dtype=float)
+        for nlp in self._nlps:
+            ret += nlp.evaluate_grad_objective()
+
+        if out is not None:
+            ret.copyto(out)
+            return out
+
+        return ret
```
Contributor:

Do we want to sum multiple objectives? At first thought, it seems like we should only allow one objective to be implemented. If each grey box intends to implement an objective term, maybe the methods on ExternalGreyBoxModel should be called e.g. evaluate_objective_term.
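(For illustration only, a hypothetical sketch of the suggested aggregation on the composite NLP side; `evaluate_objective_term` is just the proposed name, not an existing method.)

```python
# Hypothetical sketch, not from the PR: if each grey box exposed a single
# objective *term*, the composite NLP's objective would be their sum,
# mirroring the summation in the diff above.
def evaluate_objective(self):
    total = 0.0
    for nlp in self._nlps:  # one sub-NLP per grey box block
        total += nlp.evaluate_objective_term()
    return total
```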

Contributor:

I agree with Robby in principle that I don't always want to sum my objectives, but I am happy to accept this as a first pass.

@codykarcher (Contributor):

> I haven't reviewed changes in _evaluate_hessian_if_necessary_and_cache yet, but this looks fine.
>
> Just curious, what is the motivation for doing this instead of just using outputs and putting them in the objective? Is there some motivating use case where this works better?

This is me. For applications I work with, it's common for people to just come with a black box and say "minimize". Clearly that approach is not ideal, but allowing this capability will enable us to reach out to people who are used to doing optimization that way and help us convert them to the light side of the force...

@Robbybp (Contributor) commented Sep 18, 2024

> For applications I work with, it's common for people to just come with a black box and say "minimize"

Okay, so this is basically just for convenience.

> Clearly that approach is not ideal

I don't think it's clear at all. I'm usually a fan of eliminating auxiliary variables where possible, so this approach appeals to me. I was just curious if there was an example where it actually mattered for performance or reliability.

@codykarcher (Contributor) left a comment

Looks good, will test functionality over the next few weeks

@codykarcher (Contributor) commented Sep 27, 2024

> > For applications I work with, it's common for people to just come with a black box and say "minimize"
>
> Okay, so this is basically just for convenience.
>
> > Clearly that approach is not ideal
>
> I don't think it's clear at all. I'm usually a fan of eliminating auxiliary variables where possible, so this approach appeals to me. I was just curious if there was an example where it actually mattered for performance or reliability.

It is not ideal in the sense that it hides structure I could potentially implement in my optimization problem. I.e., what if my black box is hiding a convex optimization problem? But yes, it is a matter of convenience, not performance.

@mrmundt (Contributor) commented Oct 8, 2024

@michaelbynum, @Robbybp - We are hoping to cut a patch release on Monday (the 14th). Any chance this is going to make it into that?

@mrmundt (Contributor) commented Nov 26, 2024

@michaelbynum, @Robbybp - Pinging on this PR again! Any update?
