Merge branch 'master' into bb/bump_QMC
ChrisRackauckas authored Dec 12, 2023
2 parents 76d24ca + e22536e commit 16a520f
Showing 16 changed files with 36 additions and 17 deletions.
3 changes: 3 additions & 0 deletions .github/dependabot.yml
@@ -5,3 +5,6 @@ updates:
    directory: "/" # Location of package manifests
    schedule:
      interval: "weekly"
    ignore:
      - dependency-name: "crate-ci/typos"
        update-types: ["version-update:semver-patch"]
13 changes: 13 additions & 0 deletions .github/workflows/SpellCheck.yml
@@ -0,0 +1,13 @@
name: Spell Check

on: [pull_request]

jobs:
  typos-check:
    name: Spell Check with Typos
    runs-on: ubuntu-latest
    steps:
      - name: Checkout Actions Repository
        uses: actions/checkout@v4
      - name: Check spelling
        uses: crate-ci/[email protected]
2 changes: 2 additions & 0 deletions .typos.toml
@@ -0,0 +1,2 @@
[default.extend-words]
ND = "ND"
1 change: 1 addition & 0 deletions Project.toml
@@ -22,6 +22,7 @@ GLM = "1.3"
IterativeSolvers = "0.9"
PolyChaos = "0.2"
QuasiMonteCarlo = "0.3"
Statistics = "1"
Zygote = "0.4, 0.5, 0.6"
julia = "1.9"

2 changes: 1 addition & 1 deletion docs/src/Salustowicz.md
@@ -5,7 +5,7 @@ The true underlying function HyGP had to approximate is the 1D Salustowicz funct

The Salustowicz benchmark function is as follows:

``f(x) = e^{(-x)} x^3 cos(x) sin(x) (cos(x) sin^2(x) - 1)``
``f(x) = e^{-x} x^3 \cos(x) \sin(x) (\cos(x) \sin^2(x) - 1)``

Let's import these two packages `Surrogates` and `Plots`:

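For reference, the corrected Salustowicz formula in this diff can be checked with a direct Julia translation (an illustrative sketch, not part of the commit):

```julia
# Salustowicz benchmark, matching the corrected LaTeX:
# f(x) = e^{-x} x^3 cos(x) sin(x) (cos(x) sin^2(x) - 1)
salustowicz(x) = exp(-x) * x^3 * cos(x) * sin(x) * (cos(x) * sin(x)^2 - 1)
```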
6 changes: 3 additions & 3 deletions docs/src/ackley.md
@@ -1,7 +1,7 @@
# Ackley Function

The Ackley function is defined as:
``f(x) = -a*exp(-b\sqrt{\frac{1}{d}\sum_{i=1}^d x_i^2}) - exp(\frac{1}{d} \sum_{i=1}^d cos(cx_i)) + a + exp(1)``
``f(x) = -a*\exp(-b\sqrt{\frac{1}{d}\sum_{i=1}^d x_i^2}) - \exp(\frac{1}{d} \sum_{i=1}^d \cos(cx_i)) + a + \exp(1)``
Usually the recommended values are: ``a = 20``, ``b = 0.2`` and ``c = 2\pi``

Let's see the 1D case.
@@ -16,15 +16,15 @@ Now, let's define the `Ackley` function:

```@example ackley
function ackley(x)
a, b, c = 20.0, -0.2, 2.0*π
a, b, c = 20.0, 0.2, 2.0*π
len_recip = inv(length(x))
sum_sqrs = zero(eltype(x))
sum_cos = sum_sqrs
for i in x
sum_cos += cos(c*i)
sum_sqrs += i^2
end
return (-a * exp(b * sqrt(len_recip*sum_sqrs)) -
return (-a * exp(-b * sqrt(len_recip*sum_sqrs)) -
exp(len_recip*sum_cos) + a + 2.71)
end
```
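The sign fix in this hunk matters: the recommended `b = 0.2` is positive, and the minus sign belongs inside the exponential. A compact standalone sketch of the corrected function (using `exp(1)` where the docs approximate it as `2.71`):

```julia
# Ackley with the corrected signs: b = 0.2 and exp(-b * ...),
# so the global minimum at the origin evaluates to ~0.
function ackley_check(x; a = 20.0, b = 0.2, c = 2π)
    d = length(x)
    -a * exp(-b * sqrt(sum(abs2, x) / d)) - exp(sum(cos, c .* x) / d) + a + exp(1)
end
```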
2 changes: 1 addition & 1 deletion docs/src/gramacylee.md
@@ -4,7 +4,7 @@ Gramacy & Lee Function is a continuous function. It is not convex. The function
``x \in [-0.5, 2.5]``.

The Gramacy & Lee is as follows:
``f(x) = \frac{sin(10\pi x)}{2x} + (x-1)^4``.
``f(x) = \frac{\sin(10\pi x)}{2x} + (x-1)^4``.

Let's import these two packages `Surrogates` and `Plots`:

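The corrected Gramacy & Lee formula translates to a one-liner in Julia (illustrative sketch, not part of the commit):

```julia
# Gramacy & Lee: f(x) = sin(10*pi*x)/(2x) + (x - 1)^4, defined on [-0.5, 2.5].
gramacy_lee(x) = sin(10π * x) / (2x) + (x - 1)^4
```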
2 changes: 1 addition & 1 deletion docs/src/optimizations.md
@@ -28,5 +28,5 @@ surrogate_optimize(obj::Function,sop1::SOP,lb::Number,ub::Number,surrSOP::Abstra
To add another optimization method, you just need to define a new
SurrogateOptimizationAlgorithm and write its corresponding algorithm, overloading the following:
```
surrogate_optimize(obj::Function,::NewOptimizatonType,lb,ub,surr::AbstractSurrogate,sample_type::SamplingAlgorithm;maxiters=100,num_new_samples=100)
surrogate_optimize(obj::Function,::NewOptimizationType,lb,ub,surr::AbstractSurrogate,sample_type::SamplingAlgorithm;maxiters=100,num_new_samples=100)
```
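A minimal sketch of what such an overload might look like, assuming Surrogates.jl is loaded; the struct definition and the random-candidate body below are illustrative assumptions, not library code:

```julia
using Surrogates
import Surrogates: surrogate_optimize

# Hypothetical new method; only the overload pattern is the point here.
struct NewOptimizationType <: SurrogateOptimizationAlgorithm end

function surrogate_optimize(obj::Function, ::NewOptimizationType, lb, ub,
        surr::AbstractSurrogate, sample_type::SamplingAlgorithm;
        maxiters = 100, num_new_samples = 100)
    x_best, y_best = nothing, Inf
    for _ in 1:maxiters
        xs = sample(num_new_samples, lb, ub, sample_type)
        x_cand = xs[argmin(surr.(xs))]    # surrogate minimizer among candidates
        y_cand = obj(x_cand)              # evaluate the true objective there
        add_point!(surr, x_cand, y_cand)  # refine the surrogate
        if y_cand < y_best
            x_best, y_best = x_cand, y_cand
        end
    end
    return x_best, y_best
end
```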
2 changes: 1 addition & 1 deletion docs/src/randomforest.md
@@ -32,7 +32,7 @@ plot!(f, label="True function", xlims=(lower_bound, upper_bound), legend=:top)

With our sampled points we can build the Random forests surrogate using the `RandomForestSurrogate` function.

`randomforest_surrogate` behaves like an ordinary function which we can simply plot. Addtionally you can specify the number of trees created
`randomforest_surrogate` behaves like an ordinary function which we can simply plot. Additionally you can specify the number of trees created
using the parameter num_round

```@example RandomForestSurrogate_tutorial
2 changes: 1 addition & 1 deletion docs/src/surrogate.md
@@ -48,7 +48,7 @@ It's great that you want to add another surrogate to the library!
You will need to:

1. Define a new mutable struct and a constructor function
2. Define add\_point!(your\_surrogate::AbstactSurrogate,x\_new,y\_new)
2. Define add\_point!(your\_surrogate::AbstractSurrogate,x\_new,y\_new)
3. Define your\_surrogate(value) for the approximation

**Example**
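The three steps in this hunk can be sketched with a deliberately trivial surrogate, a constant predictor returning the mean of the observed values. The stand-in type definitions below exist so the sketch runs standalone; in Surrogates.jl you would subtype the library's `AbstractSurrogate` and `import Surrogates: add_point!` instead:

```julia
# Stand-ins so this sketch is self-contained; Surrogates.jl provides these names.
abstract type AbstractSurrogate end

# Step 1: mutable struct (the name MeanSurrogate is hypothetical).
mutable struct MeanSurrogate <: AbstractSurrogate
    x::Vector{Float64}
    y::Vector{Float64}
end

# Step 2: add_point! records a new sample.
function add_point!(s::MeanSurrogate, x_new, y_new)
    push!(s.x, x_new)
    push!(s.y, y_new)
    return s
end

# Step 3: calling the surrogate returns the approximation (here, the mean of y).
(s::MeanSurrogate)(value) = sum(s.y) / length(s.y)
```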
2 changes: 1 addition & 1 deletion docs/src/tensor_prod.md
@@ -1,6 +1,6 @@
# Tensor product function
The tensor product function is defined as:
``f(x) = \prod_{i=1}^d cos(a\pi x_i)``
``f(x) = \prod_{i=1}^d \cos(a\pi x_i)``

Let's import Surrogates and Plots:
```@example tensor
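The corrected tensor product formula is a one-line `prod` in Julia; the default value of `a` below is an assumption, since the hunk leaves it as a free parameter:

```julia
# Tensor product function: f(x) = prod_i cos(a*pi*x_i); a = 0.5 is illustrative.
tensor_product(x; a = 0.5) = prod(cos(a * π * xi) for xi in x)
```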
4 changes: 2 additions & 2 deletions docs/src/water_flow.md
@@ -1,9 +1,9 @@
# Water flow function

The water flow function is defined as:
``f(r_w,r,T_u,H_u,T_l,H_l,L,K_w) = \frac{2*\pi*T_u(H_u - H_l)}{log(\frac{r}{r_w})*[1 + \frac{2LT_u}{log(\frac{r}{r_w})*r_w^2*K_w}+ \frac{T_u}{T_l} ]}``
``f(r_w,r,T_u,H_u,T_l,H_l,L,K_w) = \frac{2*\pi*T_u(H_u - H_l)}{\log(\frac{r}{r_w})*[1 + \frac{2LT_u}{\log(\frac{r}{r_w})*r_w^2*K_w}+ \frac{T_u}{T_l} ]}``

It has 8 dimension.
It has 8 dimensions.

```@example water
using Surrogates
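The corrected 8-dimensional water flow formula maps directly to Julia (an illustrative translation of the LaTeX, not part of the commit):

```julia
# Water flow function: 2*pi*T_u*(H_u - H_l) divided by
# log(r/r_w) * [1 + 2*L*T_u/(log(r/r_w)*r_w^2*K_w) + T_u/T_l].
function water_flow(r_w, r, T_u, H_u, T_l, H_l, L, K_w)
    log_ratio = log(r / r_w)
    2π * T_u * (H_u - H_l) /
        (log_ratio * (1 + 2L * T_u / (log_ratio * r_w^2 * K_w) + T_u / T_l))
end
```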
2 changes: 1 addition & 1 deletion src/GEK.jl
@@ -93,7 +93,7 @@ end

function GEK(x, y, lb::Number, ub::Number; p = 1.0, theta = 1.0)
if length(x) != length(unique(x))
println("There exists a repetion in the samples, cannot build Kriging.")
println("There exists a repetition in the samples, cannot build Kriging.")
return
end
mu, b, sigma, inverse_of_R = _calc_gek_coeffs(x, y, p, theta)
6 changes: 3 additions & 3 deletions src/GEKPLS.jl
@@ -201,8 +201,8 @@ function _ge_compute_pls(X, y, n_comp, grads, delta_x, xlimits, extra_points)
bb_vals = bb_vals .* grads[i, :]'
_y = y[i, :] .+ sum(bb_vals, dims = 2)

#_pls.fit(_X, _y) # relic from sklearn versiom; retained for future reference.
#coeff_pls[:, :, i] = _pls.x_rotations_ #relic from sklearn versiom; retained for future reference.
#_pls.fit(_X, _y) # relic from sklearn version; retained for future reference.
#coeff_pls[:, :, i] = _pls.x_rotations_ #relic from sklearn version; retained for future reference.

coeff_pls[:, :, i] = _modified_pls(_X, _y, n_comp) #_modified_pls returns the equivalent of SKLearn's _pls.x_rotations_
if extra_points != 0
@@ -304,7 +304,7 @@ end
######end of bb design######

"""
We substract the mean from each variable. Then, we divide the values of each
We subtract the mean from each variable. Then, we divide the values of each
variable by its standard deviation.
Parameters
2 changes: 1 addition & 1 deletion src/Kriging.jl
@@ -104,7 +104,7 @@ Constructor for type Kriging.
function Kriging(x, y, lb::Number, ub::Number; p = 2.0,
theta = 0.5 / max(1e-6 * abs(ub - lb), std(x))^p)
if length(x) != length(unique(x))
println("There exists a repetion in the samples, cannot build Kriging.")
println("There exists a repetition in the samples, cannot build Kriging.")
return
end

2 changes: 1 addition & 1 deletion src/Optimization.jl
@@ -1691,7 +1691,7 @@ function surrogate_optimize(obj::Function, sopd::SOP, lb, ub, surrSOPD::Abstract
new_points_y[i] = y_best
end

#new_points[i] is splitted in new_points_x and new_points_y now contains:
#new_points[i] is split in new_points_x and new_points_y now contains:
#[x_1,y_1; x_2,y_2,...,x_{num_new_samples},y_{num_new_samples}]

#2.4 Adaptive learning and tabu archive
