Merge pull request #459 from ArnoStrouwen/docs
LanguageTool
ChrisRackauckas authored Dec 29, 2023
2 parents 25d95f1 + e1fa015 commit c23f174
Showing 31 changed files with 94 additions and 94 deletions.
2 changes: 1 addition & 1 deletion Project.toml
@@ -19,7 +19,7 @@ Aqua = "0.8"
Cubature = "1.5"
Distributions = "0.25.71"
ExtendableSparse = "1"
Flux = "0.14"
Flux = "0.13, 0.14"
ForwardDiff = "0.10.19"
GLM = "1.5"
IterativeSolvers = "0.9"
8 changes: 4 additions & 4 deletions docs/src/BraninFunction.md
@@ -1,13 +1,13 @@
# Branin Function

The Branin Function is commonly used as a test function for metamodelling in computer experiments, especially in the context of optimization.
The Branin function is commonly used as a test function for metamodelling in computer experiments, especially in the context of optimization.

The expression of the Branin Function is given as:
``f(x) = (x_2 - \frac{5.1}{4\pi^2}x_1^{2} + \frac{5}{\pi}x_1 - 6)^2 + 10(1-\frac{1}{8\pi})\cos(x_1) + 10``

where ``x = (x_1, x_2)`` with ``-5\leq x_1 \leq 10, 0 \leq x_2 \leq 15``
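
For reference, the expression above translates directly into Julia; the following is an illustrative sketch, not the tutorial's own example code:

```julia
# Branin function as written above; x is a 2-element point (x1, x2).
function branin(x)
    x1, x2 = x[1], x[2]
    term = x2 - (5.1 / (4 * pi^2)) * x1^2 + (5 / pi) * x1 - 6
    return term^2 + 10 * (1 - 1 / (8 * pi)) * cos(x1) + 10
end

branin((-pi, 12.275))   # ≈ 0.397887, one of the function's three global minima
```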

First of all we will import these two packages `Surrogates` and `Plots`.
First of all, we will import these two packages: `Surrogates` and `Plots`.

```@example BraninFunction
using Surrogates
@@ -50,7 +50,7 @@ scatter!(xs, ys)
plot(p1, p2, title="True function")
```

Now it's time to try fitting different surrogates and then we will plot them.
Now it's time to try fitting different surrogates, and then we will plot them.
We will have a look at the radial basis surrogate `Radial Basis Surrogate`. :

```@example BraninFunction
@@ -65,7 +65,7 @@ scatter!(xs, ys, marker_z=zs)
plot(p1, p2, title="Radial Surrogate")
```

Now, we will have a look on `Inverse Distance Surrogate`:
Now, we will have a look at `Inverse Distance Surrogate`:
```@example BraninFunction
InverseDistance = InverseDistanceSurrogate(xys, zs, lower_bound, upper_bound)
```
10 changes: 5 additions & 5 deletions docs/src/InverseDistance.md
@@ -1,4 +1,4 @@
The **Inverse Distance Surrogate** is an interpolating method and in this method the unknown points are calculated with a weighted average of the sampling points. This model uses the inverse distance between the unknown and training points to predict the unknown point. We do not need to fit this model because the response of an unknown point x is computed with respect to the distance between x and the training points.
The **Inverse Distance Surrogate** is an interpolating method, and in this method, the unknown points are calculated with a weighted average of the sampling points. This model uses the inverse distance between the unknown and training points to predict the unknown point. We do not need to fit this model because the response of an unknown point x is computed with respect to the distance between x and the training points.
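
As a rough illustration of the idea, a Shepard-style weighting in one dimension might look like the sketch below; this is not the package's internal implementation, and the exponent `p` and the helper name are assumptions:

```julia
# Inverse distance weighting: predict the value at x_new as a weighted average of
# the training values ys at locations xs, with weights 1 / distance^p. At a training
# point the method returns the exact value, which is why it interpolates.
function idw_predict(x_new, xs, ys; p = 2)
    ds = abs.(x_new .- xs)
    i = findfirst(iszero, ds)
    i === nothing || return ys[i]
    ws = 1 ./ ds .^ p
    return sum(ws .* ys) / sum(ws)
end

xs = [0.0, 1.0, 2.0]
ys = sin.(xs)
idw_predict(1.5, xs, ys)   # dominated by the two nearest training points
```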

Let's optimize the following function to use Inverse Distance Surrogate:

@@ -53,7 +53,7 @@ plot!(InverseDistance, label="Surrogate function", xlims=(lower_bound, upper_bo

Having built a surrogate, we can now use it to search for minima in our original function `f`.

To optimize using our surrogate we call `surrogate_optimize` method. We choose to use Stochastic RBF as optimization technique and again Sobol sampling as sampling technique.
To optimize using our surrogate we call `surrogate_optimize` method. We choose to use Stochastic RBF as the optimization technique and again Sobol sampling as the sampling technique.

```@example Inverse_Distance1D
@show surrogate_optimize(f, SRBF(), lower_bound, upper_bound, InverseDistance, SobolSample())
@@ -65,7 +65,7 @@ plot!(InverseDistance, label="Surrogate function", xlims=(lower_bound, upper_bo

## Inverse Distance Surrogate Tutorial (ND):

First of all we will define the `Schaffer` function we are going to build surrogate for. Notice, one how its argument is a vector of numbers, one for each coordinate, and its output is a scalar.
First of all we will define the `Schaffer` function we are going to build a surrogate for. Notice, how its argument is a vector of numbers, one for each coordinate, and its output is a scalar.

```@example Inverse_DistanceND
using Plots # hide
@@ -84,7 +84,7 @@ end

### Sampling

Let's define our bounds, this time we are working in two dimensions. In particular we want our first dimension `x` to have bounds `-5, 10`, and `0, 15` for the second dimension. We are taking 60 samples of the space using Sobol Sequences. We then evaluate our function on all of the sampling points.
Let's define our bounds, this time we are working in two dimensions. In particular we want our first dimension `x` to have bounds `-5, 10`, and `0, 15` for the second dimension. We are taking 60 samples of the space using Sobol Sequences. We then evaluate our function on all the sampling points.
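
In code, that sampling step looks roughly as follows; this sketch assumes the `schaffer` function defined above and mirrors the bounds and sample count from the text:

```julia
using Surrogates

lower_bound = [-5.0, 0.0]    # lower bounds for (x1, x2)
upper_bound = [10.0, 15.0]   # upper bounds for (x1, x2)

# 60 Sobol-sequence samples of the 2D box; each element of xys is a tuple (x1, x2).
xys = sample(60, lower_bound, upper_bound, SobolSample())
zs = schaffer.(xys)          # evaluate the objective at every sampled point
```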

```@example Inverse_DistanceND
n_samples = 60
@@ -124,7 +124,7 @@ plot(p1, p2, title="Surrogate") # hide


### Optimizing
With our surrogate we can now search for the minima of the function.
With our surrogate, we can now search for the minima of the function.

Notice how the new sampled points, which were created during the optimization process, are appended to the `xys` array.
This is why its size changes.
10 changes: 5 additions & 5 deletions docs/src/LinearSurrogate.md
@@ -28,7 +28,7 @@ plot!(f, label="True function", xlims=(lower_bound, upper_bound))

## Building a Surrogate

With our sampled points we can build the **Linear Surrogate** using the `LinearSurrogate` function.
With our sampled points, we can build the **Linear Surrogate** using the `LinearSurrogate` function.

We can simply calculate `linear_surrogate` for any value.
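
A minimal sketch of those two steps; the objective `f` here is a placeholder, while `LinearSurrogate` and the callable surrogate are the same calls used in the tutorial's own example blocks:

```julia
using Surrogates

f = x -> 2 * x + 10 * sin(x)               # placeholder 1D objective
lower_bound, upper_bound = 0.0, 10.0
x = sample(20, lower_bound, upper_bound, SobolSample())
y = f.(x)

my_linear_surr_1D = LinearSurrogate(x, y, lower_bound, upper_bound)
my_linear_surr_1D(5.0)                     # evaluate the surrogate at any point
```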

@@ -51,7 +51,7 @@ plot!(my_linear_surr_1D, label="Surrogate function", xlims=(lower_bound, upper_

Having built a surrogate, we can now use it to search for minima in our original function `f`.

To optimize using our surrogate we call `surrogate_optimize` method. We choose to use Stochastic RBF as optimization technique and again Sobol sampling as sampling technique.
To optimize using our surrogate we call `surrogate_optimize` method. We choose to use Stochastic RBF as the optimization technique and again Sobol sampling as the sampling technique.

```@example linear_surrogate1D
@show surrogate_optimize(f, SRBF(), lower_bound, upper_bound, my_linear_surr_1D, SobolSample())
@@ -63,7 +63,7 @@ plot!(my_linear_surr_1D, label="Surrogate function", xlims=(lower_bound, upper_

## Linear Surrogate tutorial (ND)

First of all we will define the `Egg Holder` function we are going to build surrogate for. Notice, one how its argument is a vector of numbers, one for each coordinate, and its output is a scalar.
First of all we will define the `Egg Holder` function we are going to build a surrogate for. Notice, one how its argument is a vector of numbers, one for each coordinate, and its output is a scalar.

```@example linear_surrogateND
using Plots # hide
@@ -104,7 +104,7 @@ plot(p1, p2, title="True function") # hide
```

### Building a surrogate
Using the sampled points we build the surrogate, the steps are analogous to the 1-dimensional case.
Using the sampled points, we build the surrogate, the steps are analogous to the 1-dimensional case.

```@example linear_surrogateND
my_linear_ND = LinearSurrogate(xys, zs, lower_bound, upper_bound)
@@ -119,7 +119,7 @@ plot(p1, p2, title="Surrogate") # hide
```

### Optimizing
With our surrogate we can now search for the minima of the function.
With our surrogate, we can now search for the minima of the function.

Notice how the new sampled points, which were created during the optimization process, are appended to the `xys` array.
This is why its size changes.
2 changes: 1 addition & 1 deletion docs/src/Salustowicz.md
@@ -39,7 +39,7 @@ scatter(x, y, label="Sampled points", xlims=(lower_bound, upper_bound), legend=:
plot!(xs, salustowicz.(xs), label="True function", legend=:top)
```

Now, let's fit Salustowicz Function with different Surrogates:
Now, let's fit the Salustowicz function with different surrogates:

```@example salustowicz1D
InverseDistance = InverseDistanceSurrogate(x, y, lower_bound, upper_bound)
4 changes: 2 additions & 2 deletions docs/src/abstractgps.md
@@ -1,12 +1,12 @@
# Gaussian Process Surrogate Tutorial

!!! note
This surrogate requires the 'SurrogatesAbstractGPs' module which can be added by inputting "]add SurrogatesAbstractGPs" from the Julia command line.
This surrogate requires the 'SurrogatesAbstractGPs' module, which can be added by inputting "]add SurrogatesAbstractGPs" from the Julia command line.

Gaussian Process regression in Surrogates.jl is implemented as a simple wrapper around the [AbstractGPs.jl](https://github.com/JuliaGaussianProcesses/AbstractGPs.jl) package. AbstractGPs comes with a variety of covariance functions (kernels). See [KernelFunctions.jl](https://github.com/JuliaGaussianProcesses/KernelFunctions.jl/) for examples.

!!! tip
The examples below demonstrate the use of AbstractGPs with out-of-the-box settings without hyperparameter optimization (i.e. without changing parameters like lengthscale, signal variance and noise variance.) Beyond hyperparameter optimization, careful initialization of hyperparameters and priors on the parameters is required for this surrogate to work properly. For more details on how to fit GPs in practice, check out [A Practical Guide to Gaussian Processes](https://infallible-thompson-49de36.netlify.app/).
The examples below demonstrate the use of AbstractGPs with out-of-the-box settings without hyperparameter optimization (i.e. without changing parameters like lengthscale, signal variance, and noise variance). Beyond hyperparameter optimization, careful initialization of hyperparameters and priors on the parameters is required for this surrogate to work properly. For more details on how to fit GPs in practice, check out [A Practical Guide to Gaussian Processes](https://infallible-thompson-49de36.netlify.app/).
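
For instance, with AbstractGPs directly, a kernel with an explicit lengthscale, signal variance, and noise variance can be set up as in the sketch below; the numerical values are illustrative guesses, not tuned hyperparameters:

```julia
using AbstractGPs, KernelFunctions, Statistics

# Kernel with explicit signal variance (2.0) and lengthscale (0.5); the observation
# noise variance (0.1) is passed when projecting the GP onto the inputs. All three
# values are illustrative, not the result of hyperparameter optimization.
kernel = 2.0 * with_lengthscale(SqExponentialKernel(), 0.5)
gp = GP(kernel)

x = rand(10)
y = sin.(2π .* x) .+ 0.05 .* randn(10)

fx = gp(x, 0.1)             # finite projection of the GP at x with noise variance 0.1
post = posterior(fx, y)     # condition on the observations
mean(post(x))               # posterior mean back at the training inputs
```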

Also see this [example](https://juliagaussianprocesses.github.io/AbstractGPs.jl/stable/examples/1-mauna-loa/#Hyperparameter-Optimization) to understand hyperparameter optimization with AbstractGPs.
## 1D Example
2 changes: 1 addition & 1 deletion docs/src/ackley.md
@@ -64,4 +64,4 @@ plot!(xs, ackley.(xs), label="True function", legend=:top)
plot!(xs, my_rad.(xs), label="Radial basis optimized", legend=:top)
```

The DYCORS methods successfully finds the minimum.
The DYCORS method successfully finds the minimum.
2 changes: 1 addition & 1 deletion docs/src/cantilever.md
@@ -42,7 +42,7 @@ plot(p1, p2, title="True function")
```


Fitting different Surrogates:
Fitting different surrogates:
```@example beam
mypoly = PolynomialChaosSurrogate(xys, zs, lb, ub)
loba = LobachevskySurrogate(xys, zs, lb, ub)
12 changes: 6 additions & 6 deletions docs/src/gek.md
@@ -1,6 +1,6 @@
## Gradient Enhanced Kriging

Gradient-enhanced Kriging is an extension of kriging which supports gradient information. GEK is usually more accurate than kriging, however, it is not computationally efficient when the number of inputs, the number of sampling points, or both, are high. This is mainly due to the size of the corresponding correlation matrix that increases proportionally with both the number of inputs and the number of sampling points.
Gradient-enhanced Kriging is an extension of kriging which supports gradient information. GEK is usually more accurate than kriging. However, it is not computationally efficient when the number of inputs, the number of sampling points, or both, are high. This is mainly due to the size of the corresponding correlation matrix, which increases proportionally with both the number of inputs and the number of sampling points.
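
To see why this becomes expensive, note that with `n` sampling points in `d` input dimensions the GEK correlation matrix couples every function value with every gradient component, so it grows to roughly `n(d + 1) × n(d + 1)`; a quick back-of-the-envelope illustration (bookkeeping only, not package internals):

```julia
# Rough size of the GEK correlation matrix: function values plus every gradient
# component at every sampling point.
n_samples = 100
d = 10
N = n_samples * (d + 1)
println("correlation matrix: $N x $N = $(N^2) entries")   # 1100 x 1100 = 1,210,000
```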

Let's have a look at the following function to use Gradient Enhanced Surrogate:
``f(x) = sin(x) + 2*x^2``
@@ -15,7 +15,7 @@ default()

### Sampling

We choose to sample f in 8 points between 0 to 1 using the `sample` function. The sampling points are chosen using a Sobol sequence, this can be done by passing `SobolSample()` to the `sample` function.
We choose to sample f in 8 points between 0 and 1 using the `sample` function. The sampling points are chosen using a Sobol sequence, this can be done by passing `SobolSample()` to the `sample` function.

```@example GEK1D
n_samples = 10
@@ -34,7 +34,7 @@ plot!(f, label="True function", xlims=(lower_bound, upper_bound), legend=:top)

### Building a surrogate

With our sampled points we can build the Gradient Enhanced Kriging surrogate using the `GEK` function.
With our sampled points, we can build the Gradient Enhanced Kriging surrogate using the `GEK` function.

```@example GEK1D
@@ -47,7 +47,7 @@ plot!(my_gek, label="Surrogate function", ribbon=p->std_error_at_point(my_gek, p

## Gradient Enhanced Kriging Surrogate Tutorial (ND)

First of all let's define the function we are going to build a surrogate for.
First of all, let's define the function we are going to build a surrogate for.

```@example GEK_ND
using Plots # hide
@@ -69,7 +69,7 @@ end

### Sampling

Let's define our bounds, this time we are working in two dimensions. In particular we want our first dimension `x` to have bounds `0, 10`, and `0, 10` for the second dimension. We are taking 80 samples of the space using Sobol Sequences. We then evaluate our function on all of the sampling points.
Let's define our bounds, this time we are working in two dimensions. In particular, we want our first dimension `x` to have bounds `0, 10`, and `0, 10` for the second dimension. We are taking 80 samples of the space using Sobol Sequences. We then evaluate our function on all the sampling points.

```@example GEK_ND
n_samples = 45
@@ -91,7 +91,7 @@ plot(p1, p2, title="True function") # hide
```

### Building a surrogate
Using the sampled points we build the surrogate, the steps are analogous to the 1-dimensional case.
Using the sampled points, we build the surrogate, the steps are analogous to the 1-dimensional case.

```@example GEK_ND
grad1 = x1 -> 2*(300*(x[1])^5 - 300*(x[1])^2*x[2] + x[1] -1)
2 changes: 1 addition & 1 deletion docs/src/gekpls.md
@@ -1,6 +1,6 @@
## GEKPLS Surrogate Tutorial

Gradient Enhanced Kriging with Partial Least Squares Method (GEKPLS) is a surrogate modelling technique that brings down computation time and returns improved accuracy for high-dimensional problems. The Julia implementation of GEKPLS is adapted from the Python version by [SMT](https://github.com/SMTorg) which is based on this [paper](https://arxiv.org/pdf/1708.02663.pdf).
Gradient Enhanced Kriging with Partial Least Squares Method (GEKPLS) is a surrogate modeling technique that brings down computation time and returns improved accuracy for high-dimensional problems. The Julia implementation of GEKPLS is adapted from the Python version by [SMT](https://github.com/SMTorg) which is based on this [paper](https://arxiv.org/pdf/1708.02663.pdf).

The following are the inputs when building a GEKPLS surrogate:

6 changes: 3 additions & 3 deletions docs/src/gramacylee.md
@@ -1,6 +1,6 @@
## Gramacy & Lee Function

Gramacy & Lee Function is a continuous function. It is not convex. The function is defined on 1-dimensional space. It is an unimodal. The function can be defined on any input domain but it is usually evaluated on
The Gramacy & Lee function is a continuous function. It is not convex. The function is defined on a 1-dimensional space. It is unimodal. The function can be defined on any input domain, but it is usually evaluated on
``x \in [-0.5, 2.5]``.

The Gramacy & Lee is as follows:
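
``f(x) = \frac{\sin(10\pi x)}{2x} + (x - 1)^4``

That is the commonly used form of the function; a minimal Julia sketch of it (the tutorial's own `gramacylee` definition may differ slightly):

```julia
# Standard Gramacy & Lee (2012) test function on x in [-0.5, 2.5].
function gramacylee(x)
    return sin(10 * pi * x) / (2 * x) + (x - 1)^4
end

gramacylee(0.5)   # sin(5pi) is ~0, so the result is (0.5 - 1)^4 ≈ 0.0625
```
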
@@ -25,7 +25,7 @@ function gramacylee(x)
end
```

Let's sample f in 25 points between -0.5 and 2.5 using the `sample` function. The sampling points are chosen using a Sobol Sample, this can be done by passing `SobolSample()` to the `sample` function.
Let's sample f in 25 points between -0.5 and 2.5 using the `sample` function. The sampling points are chosen using a Sobol sample, this can be done by passing `SobolSample()` to the `sample` function.

```@example gramacylee1D
n = 25
@@ -38,7 +38,7 @@ scatter(x, y, label="Sampled points", xlims=(lower_bound, upper_bound), ylims=(-
plot!(xs, gramacylee.(xs), label="True function", legend=:top)
```

Now, let's fit Gramacy & Lee Function with different Surrogates:
Now, let's fit Gramacy & Lee function with different surrogates:

```@example gramacylee1D
my_pol = PolynomialChaosSurrogate(x, y, lower_bound, upper_bound)
4 changes: 2 additions & 2 deletions docs/src/index.md
@@ -8,7 +8,7 @@ The construction of a surrogate model can be seen as a three-step process:
2. Construction of the surrogate model
3. Surrogate optimization

The sampling methods are super important for the behavior of the Surrogate. Sampling can be done through [QuasiMonteCarlo.jl](https://github.com/SciML/QuasiMonteCarlo.jl), all the functions available there can be used in Surrogates.jl.
The sampling methods are super important for the behavior of the surrogate. Sampling can be done through [QuasiMonteCarlo.jl](https://github.com/SciML/QuasiMonteCarlo.jl), all the functions available there can be used in Surrogates.jl.
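
Put together, the three steps look roughly like this; the objective below is a placeholder, while `sample`, `RadialBasis`, and `surrogate_optimize` are the calls used throughout these tutorials:

```julia
using Surrogates

f = x -> x * sin(x)                        # illustrative objective
lower_bound, upper_bound = 0.0, 10.0

# 1. Sample the design space (Sobol sequence via QuasiMonteCarlo.jl).
x = sample(30, lower_bound, upper_bound, SobolSample())
y = f.(x)

# 2. Construct a surrogate model from the samples.
my_radial = RadialBasis(x, y, lower_bound, upper_bound)

# 3. Surrogate optimization: refine the surrogate while searching for a minimum.
surrogate_optimize(f, SRBF(), lower_bound, upper_bound, my_radial, SobolSample())
```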

The available surrogates are:

@@ -27,7 +27,7 @@ That is, simultaneously looking for a minimum **and** sampling the most unknown
The available optimization methods are:

- Stochastic RBF (SRBF)
- Lower confidence bound strategy (LCBS)
- Lower confidence-bound strategy (LCBS)
- Expected improvement (EI)
- Dynamic coordinate search (DYCORS)
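
Any of these can be passed as the second argument of `surrogate_optimize`; for example, switching the sketch above from SRBF to DYCORS is a one-line change:

```julia
# Same surrogate and bounds as above, different optimization method.
surrogate_optimize(f, DYCORS(), lower_bound, upper_bound, my_radial, SobolSample())
```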
