
Does Optimization.jl (already) work with Manopt.jl? #814

Open
kellertuer opened this issue Sep 2, 2024 · 6 comments
Labels
question Further information is requested

Comments

@kellertuer

In the Readme, Manopt.jl is not mentioned yet, nor is it on the start page https://docs.sciml.ai/Optimization/stable/, so I am not sure how far this has come.

There is a documentation page at https://docs.sciml.ai/Optimization/stable/optimization_packages/manopt/, but every run there seems to report failure. So does it not work? If it does not work, should we maybe remove the docs page again? What is missing so that it would work?

@kellertuer
Author

I further noticed that this file

https://github.com/SciML/Optimization.jl/blob/master/lib/OptimizationManopt/src/OptimizationManopt.jl

never uses anything from Manifolds.jl, so it would be good to remove it from the dependencies (it is only a weak dependency of Manopt as well).

@Vaibhavdixit02
Member

The Manopt check for convergence

opt_ret = Manopt.indicates_convergence(asc) ? ReturnCode.Success : ReturnCode.Failure
doesn't seem to return true for any of those examples, so our return code is Failure. That has no relation to whether it "works": to me, "works" means whether you can use the underlying solver at all, and the answer to that is yes, you can.
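For context, a minimal sketch of why maxiter-only setups map to `Failure` (the criterion names below are Manopt.jl's; the specific numbers are just illustrative):

```julia
using Manopt

# A pure iteration cap never certifies convergence, so the wrapper's
# mapping above yields ReturnCode.Failure for any run stopped by it:
Manopt.indicates_convergence(StopAfterIteration(1000))        # false

# A small-gradient criterion does certify convergence when it fires:
Manopt.indicates_convergence(StopWhenGradientNormLess(1e-8))  # true
```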

@Vaibhavdixit02
Member

I further noticed that this file

https://github.com/SciML/Optimization.jl/blob/master/lib/OptimizationManopt/src/OptimizationManopt.jl

never uses anything from Manifolds.jl, so it would be good to remove it from the dependencies (it is only a weak dependency of Manopt as well).

Good point, that can be removed

@kellertuer
Author

The Manopt check for convergence
doesn't seem to return true for any of those examples so our return code is Failure. That has no relation to whether it "work"s because that IMO is whether or not you can use the underlying solver, the answer to which is yes you can.

I do not agree. Since currently all examples report “Failure”, that suggests to the not-so-experienced reader that none of the examples worked.
The actual reason is probably that, with the current settings, all of the solvers stopped via their maxiter fallback, which surely does not indicate proper convergence, and hence the code line you mention returns false.

Maybe the examples could be improved then?
For example, not only is Frank–Wolfe a bit slow, but the experiment seems to be set up with just a maxiter stopping criterion. That will never return true, because the solver always stops by hitting maxiter, which says nothing about convergence.
Rosenbrock with gradient descent is probably the same; gradient descent converges very slowly there.
For the last one I am not so sure.
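One possible shape for such an improved setup (a sketch only; the problem, tolerances, and iteration cap here are assumptions, not the docs' actual example):

```julia
using Manopt, Manifolds

M = Euclidean(2)
f(M, p) = sum((p .- 1) .^ 2)       # simple smooth test cost
grad_f(M, p) = 2 .* (p .- 1)

# Pair the iteration cap with a real convergence test, so a successful
# run stops via the gradient-norm criterion and indicates_convergence
# (and hence the wrapper's ReturnCode) can report success:
p = gradient_descent(M, f, grad_f, zeros(2);
    stopping_criterion = StopAfterIteration(10_000) |
                         StopWhenGradientNormLess(1e-9))
```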

@Vaibhavdixit02
Member

Thanks for the suggestions @kellertuer, I'll work on updating the stopping criteria to be more comprehensive for the Manopt wrapper.

@kellertuer
Author

Nice. For now, Manopt mainly has the indicates_convergence function, which is basically binary; if you would prefer something more detailed, we can check what would be best to provide. I would, however, prefer to avoid “magic number” functions, that is, a function that returns 232 if a minimum was found but 12121128 if none was found. OK, I am exaggerating, but I have seen such status codes (1–8 or so), and I would prefer not to do that; if there is any other idea, we can check.
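A hypothetical shape for a more detailed but still non-magic-number report (all names here are made up for illustration; nothing from Manopt's current API):

```julia
# Hypothetical sketch: a self-describing stop report instead of an
# opaque integer status code.
struct StopReport
    converged::Bool
    reason::Symbol  # e.g. :gradient_norm_small or :max_iterations_reached
end

report = StopReport(false, :max_iterations_reached)
report.converged   # false
```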
