New tutorials and docs
Vaibhavdixit02 committed Jul 11, 2024
1 parent e987377 commit ea76fab
Showing 5 changed files with 181 additions and 25 deletions.
44 changes: 21 additions & 23 deletions docs/src/getting_started.md
@@ -1,41 +1,39 @@
# Getting Started with Optimization in Julia
# Getting Started with Optimization.jl

In this tutorial, we introduce the basics of Optimization.jl by showing
how to easily mix local optimizers from Optim.jl and global optimizers
from BlackBoxOptim.jl on the Rosenbrock equation. The simplest copy-pasteable
code to get started is the following:
how to easily mix local optimizers and global optimizers on the Rosenbrock equation.
The simplest copy-pasteable code using a quasi-Newton method (LBFGS) to solve the Rosenbrock problem is the following:

```@example intro
# Import the package and define the problem to optimize
using Optimization
using Optimization, Zygote
rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2
u0 = zeros(2)
p = [1.0, 100.0]
prob = OptimizationProblem(rosenbrock, u0, p)
# Import a solver package and solve the optimization problem
using OptimizationOptimJL
sol = solve(prob, NelderMead())
optf = OptimizationFunction(rosenbrock, AutoZygote())
prob = OptimizationProblem(optf, u0, p)
# Import a different solver package and solve the optimization problem a different way
using OptimizationBBO
prob = OptimizationProblem(rosenbrock, u0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
sol = solve(prob, Optimization.LBFGS())
```

Notice that Optimization.jl is the core glue package that holds all the common
pieces, but to solve the equations, we need to use a solver package. Here, OptimizationOptimJL
is for [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) and OptimizationBBO is for
[BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl).
## Import a different solver package and solve the problem

OptimizationOptimJL is a wrapper for [Optim.jl](https://github.com/JuliaNLSolvers/Optim.jl) and OptimizationBBO is a wrapper for [BlackBoxOptim.jl](https://github.com/robertfeldt/BlackBoxOptim.jl).

The output of the first optimization task (with the `NelderMead()` algorithm)
is given below:
First, let's use NelderMead, a derivative-free solver from Optim.jl:

```@example intro
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, u0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, NelderMead())
using OptimizationOptimJL
sol = solve(prob, Optim.NelderMead())
```

BlackBoxOptim.jl offers derivative-free global optimization solvers that require the bounds to be set via `lb` and `ub` in the `OptimizationProblem`. Let's use the `BBO_adaptive_de_rand_1_bin_radiuslimited()` solver:

```@example intro
using OptimizationBBO
prob = OptimizationProblem(rosenbrock, u0, p, lb = [-1.0, -1.0], ub = [1.0, 1.0])
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
```

The solution from the original solver can always be obtained via `original`:
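For instance, assuming `sol` is the solution returned by the BlackBoxOptim.jl solve above, a minimal sketch of retrieving the wrapped solver's own return object is:

```julia
# Access the result object produced by the underlying solver (here BlackBoxOptim.jl)
sol.original
```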
5 changes: 3 additions & 2 deletions docs/src/index.md
@@ -29,8 +29,9 @@ Pkg.add("Optimization")

The packages relevant to the core functionality of Optimization.jl will be imported
accordingly and, in most cases, you do not have to worry about the manual
installation of dependencies. However, you will need to add the specific optimizer
packages.
installation of dependencies. [Optimization.jl](@ref) natively offers an LBFGS solver,
but for more solver choices (discussed below in Optimization Packages), you will need
to add the specific wrapper packages.
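As a sketch, adding and loading one of the wrapper packages (here OptimizationOptimJL, the wrapper for Optim.jl) looks like:

```julia
using Pkg
Pkg.add("OptimizationOptimJL")  # wrapper package exposing Optim.jl solvers

using Optimization, OptimizationOptimJL
# Optim.jl methods such as NelderMead() and BFGS() can now be passed to `solve`
```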

## Contributing

39 changes: 39 additions & 0 deletions docs/src/optimization_packages/optimization.md
@@ -0,0 +1,39 @@
# Optimization.jl

Some solvers are available directly in the Optimization.jl package, without the need to install any of the solver wrapper packages.

## Methods

`LBFGS`: The popular quasi-Newton method that leverages a limited-memory BFGS approximation of the inverse of the Hessian. It is implemented as a wrapper over the [L-BFGS-B](https://users.iems.northwestern.edu/%7Enocedal/lbfgsb.html) Fortran routine, accessed from the [LBFGSB.jl](https://github.com/Gnimuc/LBFGSB.jl/) package, and directly supports box constraints.

It can also handle arbitrary nonlinear constraints through an Augmented Lagrangian method with bound constraints, described in Section 17.4 of Numerical Optimization by Nocedal and Wright, thus serving as a general-purpose nonlinear optimization solver available directly in Optimization.jl.

## Examples

### Unconstrained Rosenbrock problem

```@example L-BFGS
using Optimization, Zygote
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
p = [1.0, 100.0]
optf = OptimizationFunction(rosenbrock, AutoZygote())
prob = Optimization.OptimizationProblem(optf, x0, p)
sol = solve(prob, Optimization.LBFGS())
```

### With nonlinear and bounds constraints

```@example L-BFGS
# In-place constraint function: res[1] = x1^2 + x2^2 and res[2] = x2*sin(x1) + x1 - 5
function con2_c(res, x, p)
    res .= [x[1]^2 + x[2]^2, (x[2] * sin(x[1]) + x[1]) - 5]
end

optf = OptimizationFunction(rosenbrock, AutoZygote(), cons = con2_c)
# lcons/ucons enforce res[1] == 1 and res[2] <= 0; lb/ub add box constraints on x
prob = OptimizationProblem(optf, x0, p, lcons = [1.0, -Inf],
    ucons = [1.0, 0.0], lb = [-1.0, -1.0],
    ub = [1.0, 1.0])
res = solve(prob, Optimization.LBFGS(), maxiters = 100)
```
88 changes: 88 additions & 0 deletions docs/src/tutorials/certification.md
@@ -0,0 +1,88 @@
# Using SymbolicAnalysis.jl for convexity certificates

In this tutorial, we will show how to use [SymbolicAnalysis.jl](https://github.com/Vaibhavdixit02/SymbolicAnalysis.jl) to automatically certify the convexity of an optimization problem.

This works with the `structural_analysis` keyword argument to `OptimizationProblem`. It tells the package to try to trace through the objective and constraints with symbolic variables (for more details, see the [Symbolics documentation](https://symbolics.juliasymbolics.org/stable/manual/functions/#function_registration)). This relies on the Disciplined Programming approach and hence necessitates the use of "atoms" from the SymbolicAnalysis.jl package.

<!-- Let's look at a simple long-only Markowitz portfolio optimization problem.
```math
\begin{aligned}
\text{minimize} \quad & x^{T}\Sigma x \\
\text{subject to} \quad & p^{T}x \geq r_{min} \\
& \mathbf{1}^{T}x = 1 \\
& x \geq 0
\end{aligned}
```
We'll use the MTK symbolic interface to define the problem.
```@example symanalysis
using SymbolicAnalysis, Zygote, LinearAlgebra, Optimization, OptimizationMOI
prices = rand(5)
Σsqrt = rand(5,5)
Σ = Σsqrt*Σsqrt'
r_min = 1.0
function objective(x, p = nothing)
    return SymbolicAnalysis.quad_form(x, Σ)
end
function cons(res, x, p = nothing)
    res[1] = (x'*prices)[1] - r_min
    res[2] = (ones(1, 5)*x)[1] - 1.0
end
optf = OptimizationFunction(objective, Optimization.AutoZygote(); cons = cons)
x0unnorm = rand(5)
x0 = x0unnorm./sum(x0unnorm)
prob = OptimizationProblem(optf, x0, lcons = [-Inf, 0.0], ucons = [0.0, 0.0], structural_analysis = true)
sol = solve(prob, Optimization.LBFGS())
```
-->

We'll use a simple example to illustrate the convexity structure certification process.

```@example symanalysis
using SymbolicAnalysis, Zygote, LinearAlgebra, Optimization, OptimizationMOI
function f(x, p = nothing)
    return exp(x[1]) + x[1]^2
end
optf = OptimizationFunction(f, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, [0.4], structural_analysis = true)
sol = solve(prob, Optimization.LBFGS(), maxiters = 1000)
```

The result can be accessed as the `analysis_results` field of the solution.

```@example symanalysis
sol.cache.analysis_results.objective
```

Relatedly, you can enable structural analysis for Riemannian optimization problems (currently supported only on the SPD manifold).

We'll look at the Riemannian center of mass of SPD matrices, which is known to be a geodesically convex problem on the SPD manifold.

```@example symanalysis
using Optimization, OptimizationManopt, Symbolics, Manifolds, Random, LinearAlgebra, SymbolicAnalysis
M = SymmetricPositiveDefinite(5)
m = 100
σ = 0.005
q = Matrix{Float64}(LinearAlgebra.I(5)) .+ 2.0
data2 = [exp(M, q, σ * rand(M; vector_at = q)) for i = 1:m];
f(x, p = nothing) = sum(SymbolicAnalysis.distance(M, data2[i], x)^2 for i = 1:5)
optf = OptimizationFunction(f, Optimization.AutoZygote())
prob = OptimizationProblem(optf, data2[1]; manifold = M, structural_analysis = true)
opt = OptimizationManopt.GradientDescentOptimizer()
sol = solve(prob, opt, maxiters = 100)
```
30 changes: 30 additions & 0 deletions docs/src/tutorials/ensemble.md
@@ -0,0 +1,30 @@
# Multistart optimization with EnsembleProblem

The `EnsembleProblem` in SciML serves as a common interface for running a problem on multiple sets of initializations. In the context
of optimization, this is useful for performing multistart optimization.

This can be useful for complex, low-dimensional problems. We demonstrate this, again, on the Rosenbrock function.

```@example ensemble
using Optimization, OptimizationOptimJL, Random
Random.seed!(100)
rosenbrock(x, p) = (p[1] - x[1])^2 + p[2] * (x[2] - x[1]^2)^2
x0 = zeros(2)
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, x0, [1.0, 100.0])
@time sol1 = Optimization.solve(prob, OptimizationOptimJL.BFGS(), maxiters = 5)
@show sol1.objective
ensembleprob = Optimization.EnsembleProblem(
prob, [x0, x0 .+ rand(2), x0 .+ rand(2), x0 .+ rand(2)])
@time sol = Optimization.solve(ensembleprob, OptimizationOptimJL.BFGS(),
EnsembleThreads(), trajectories = 4, maxiters = 5)
@show findmin(i -> sol[i].objective, 1:4)[1]
```

With the same number of iterations (5), we get a much lower objective value (roughly 1/100th) by using multiple initial points. The initialization strategy used here was fairly trivial; approaches based on quasi-Monte Carlo sampling are typically more effective, as sketched below.
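A minimal sketch of this idea, assuming the `prob` from the example above and that [QuasiMonteCarlo.jl](https://github.com/SciML/QuasiMonteCarlo.jl) is installed (the bounds below are chosen purely for illustration):

```julia
using QuasiMonteCarlo

# Hypothetical search box from which to draw the starting points
lb, ub = [-1.0, -1.0], [1.0, 1.0]
n_starts = 4

# Latin hypercube sample: a 2 × n_starts matrix, one column per starting point
starts = QuasiMonteCarlo.sample(n_starts, lb, ub, LatinHypercubeSample())

ensembleprob_qmc = Optimization.EnsembleProblem(prob, [starts[:, i] for i in 1:n_starts])
sol_qmc = Optimization.solve(ensembleprob_qmc, OptimizationOptimJL.BFGS(),
    EnsembleThreads(), trajectories = n_starts, maxiters = 5)
```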
