Bump QMC to 0.3 and fix tests #446

Merged
merged 18 commits on Dec 13, 2023
Commits (18)
314439c
chore: Replace UniformSample with RandomSample
ashutosh-b-b Oct 7, 2023
5fda238
build: bump QMC and julia version
ashutosh-b-b Nov 13, 2023
1433365
fix: update `sample` method in Surrogates
ashutosh-b-b Nov 13, 2023
59903d4
feat: add `SectionSample` from QMC to Surrogates
ashutosh-b-b Nov 13, 2023
df6022e
fix: fix maxima of lb and minima of ub in `surrogate_optimise` for
ashutosh-b-b Nov 13, 2023
0296180
fix(SurrogatesFlux): remove `@epochs` since its no longer in Flux
ashutosh-b-b Nov 13, 2023
7cb5f3f
test(SurrogatesFlux): update tests
ashutosh-b-b Nov 13, 2023
5c1445e
test(SurrogatesMOE): mark tests broken
ashutosh-b-b Nov 13, 2023
187546c
test(Surrogates): increase RMSE threshold in tests of GEKPLS
ashutosh-b-b Nov 13, 2023
2e49e2c
test(Surrogates): qualify `free_dimenstions` from Surrogates instead of
ashutosh-b-b Nov 13, 2023
99863b8
test(Surrogates): replace `LowDiscrepancySample` with `HaltonSample`
ashutosh-b-b Nov 13, 2023
1063532
test(Surrogates): update tests for sampling with new QMC Api
ashutosh-b-b Nov 13, 2023
76d24ca
test: update the MOE ND tests, use 9 samples and unmark broken
sathvikbhagavan Dec 12, 2023
16a520f
Merge branch 'master' into bb/bump_QMC
ChrisRackauckas Dec 12, 2023
db9342c
test: increase one extra point in GEKPLS test 11
sathvikbhagavan Dec 12, 2023
d4531b2
ci: dev subpackages of Surrogates in doc build
sathvikbhagavan Dec 13, 2023
e27e5e2
refactor: export HaltonSample instead of LowDiscrepancySample
sathvikbhagavan Dec 13, 2023
9826638
docs: update docs to use HaltonSample
sathvikbhagavan Dec 13, 2023
10 changes: 9 additions & 1 deletion .github/workflows/Documentation.yml
@@ -16,7 +16,15 @@ jobs:
with:
version: '1'
- name: Install dependencies
run: julia --project=docs/ -e 'using Pkg; Pkg.develop(PackageSpec(path=pwd())); Pkg.instantiate()'
run: julia --project=docs/ -e 'using Pkg;
Pkg.develop(PackageSpec(path=pwd()));
Pkg.develop(PackageSpec(path=joinpath(pwd(), "lib", "SurrogatesAbstractGPs")));
Pkg.develop(PackageSpec(path=joinpath(pwd(), "lib", "SurrogatesFlux")));
Pkg.develop(PackageSpec(path=joinpath(pwd(), "lib", "SurrogatesMOE")));
Pkg.develop(PackageSpec(path=joinpath(pwd(), "lib", "SurrogatesPolyChaos")));
Pkg.develop(PackageSpec(path=joinpath(pwd(), "lib", "SurrogatesRandomForest")));
Pkg.develop(PackageSpec(path=joinpath(pwd(), "lib", "SurrogatesSVM")));
Pkg.instantiate()'
- name: Build and deploy
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # For authentication with GitHub Actions token
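For reference, the same environment can be reproduced outside CI; this is a rough local sketch (paths assume the repository layout used in the workflow above, with `docs/` as the active project):

```julia
# Hypothetical local equivalent of the CI "Install dependencies" step:
# dev the top-level package and each subpackage into the docs environment
# so the doc build resolves them from the working tree, not the registry.
using Pkg
Pkg.activate("docs")
Pkg.develop(PackageSpec(path = pwd()))
for sub in ("SurrogatesAbstractGPs", "SurrogatesFlux", "SurrogatesMOE",
            "SurrogatesPolyChaos", "SurrogatesRandomForest", "SurrogatesSVM")
    Pkg.develop(PackageSpec(path = joinpath(pwd(), "lib", sub)))
end
Pkg.instantiate()
```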
4 changes: 2 additions & 2 deletions Project.toml
@@ -21,10 +21,10 @@ Flux = "0.12, 0.13"
GLM = "1.3"
IterativeSolvers = "0.9"
PolyChaos = "0.2"
QuasiMonteCarlo = "=0.2.16"
QuasiMonteCarlo = "0.3"
Statistics = "1"
Zygote = "0.4, 0.5, 0.6"
julia = "1.6"
julia = "1.9"

[extras]
Cubature = "667455a9-e2ce-5579-9412-b964f529a492"
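The compat bump to QuasiMonteCarlo 0.3 is what drives the sampler renames applied throughout this PR; a minimal sketch of the new calls (assuming the sampler types are re-exported from Surrogates, as the docs changes below do):

```julia
using Surrogates

lb, ub, n = 0.0, 10.0, 25
x_rand = sample(n, lb, ub, RandomSample())    # previously UniformSample()
x_halton = sample(n, lb, ub, HaltonSample())  # previously LowDiscrepancySample(; base = 2)
```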
4 changes: 2 additions & 2 deletions docs/src/InverseDistance.md
@@ -15,15 +15,15 @@ default()

### Sampling

We choose to sample f at 25 points between 0 and 10 using the `sample` function. The sampling points are chosen using a low-discrepancy sequence, which can be done by passing `LowDiscrepancySample()` to the `sample` function.
We choose to sample f at 25 points between 0 and 10 using the `sample` function. The sampling points are chosen using a low-discrepancy sequence, which can be done by passing `HaltonSample()` to the `sample` function.

```@example Inverse_Distance1D
f(x) = sin(x) + sin(x)^2 + sin(x)^3

n_samples = 25
lower_bound = 0.0
upper_bound = 10.0
x = sample(n_samples, lower_bound, upper_bound, LowDiscrepancySample(;base=2))
x = sample(n_samples, lower_bound, upper_bound, HaltonSample())
y = f.(x)

scatter(x, y, label="Sampled points", xlims=(lower_bound, upper_bound), legend=:top)
2 changes: 1 addition & 1 deletion docs/src/ackley.md
@@ -58,7 +58,7 @@ The fit looks good. Let's now see if we are able to find the minimum value using
optimization methods:

```@example ackley
surrogate_optimize(ackley,DYCORS(),lb,ub,my_rad,UniformSample())
surrogate_optimize(ackley,DYCORS(),lb,ub,my_rad,RandomSample())
scatter(x, y, label="Sampled points", xlims=(lb, ub), ylims=(0, 30), legend=:top)
plot!(xs, ackley.(xs), label="True function", legend=:top)
plot!(xs, my_rad.(xs), label="Radial basis optimized", legend=:top)
2 changes: 1 addition & 1 deletion docs/src/gekpls.md
@@ -80,7 +80,7 @@ This next example demonstrates how this can be accomplished.
y = sphere_function.(x)
g = GEKPLS(x, y, grads, n_comp, delta_x, lb, ub, extra_points, initial_theta)
x_point, minima = surrogate_optimize(sphere_function, SRBF(), lb, ub, g,
UniformSample(); maxiters = 20,
RandomSample(); maxiters = 20,
num_new_samples = 20, needs_gradient = true)
println(minima)

2 changes: 1 addition & 1 deletion docs/src/index.md
@@ -107,7 +107,7 @@ my_lobachevsky = LobachevskySurrogate(x,y,lb,ub,alpha=alpha,n=n)
value = my_lobachevsky(5.0)

#Adding more data points
surrogate_optimize(f,SRBF(),lb,ub,my_lobachevsky,UniformSample())
surrogate_optimize(f,SRBF(),lb,ub,my_lobachevsky,RandomSample())

#New approximation
value = my_lobachevsky(5.0)
3 changes: 1 addition & 2 deletions docs/src/moe.md
@@ -92,7 +92,7 @@ end
lb = [-1.0, -1.0]
ub = [1.0, 1.0]
n = 150
x = sample(n, lb, ub, SobolSample())
x = sample(n, lb, ub, RandomSample())
y = discont_NDIM.(x)
x_test = sample(10, lb, ub, GoldenSample())

@@ -110,7 +110,6 @@ rbf = RadialBasis(x, y, lb, ub)
rbf_pred_vals = rbf.(x_test)
rbf_rmse = rmse(true_vals, rbf_pred_vals)
println(rbf_rmse > moe_rmse)

```

### Usage Notes - Example With Other Surrogates
16 changes: 8 additions & 8 deletions docs/src/parallel.md
@@ -17,24 +17,24 @@ To ensure that points of interest returned by `potential_optimal_points` are suf

The following strategies are available for virtual point selection for all optimization algorithms:

- "Minimum Constant Liar (CLmin)":
- "Minimum Constant Liar (MinimumConstantLiar)":
- The virtual point is assigned using the lowest known value of the merit function across all evaluated points.
- "Mean Constant Liar (CLmean)":
- "Mean Constant Liar (MeanConstantLiar)":
- The virtual point is assigned using the mean of the merit function across all evaluated points.
- "Maximum Constant Liar (CLmax)":
- "Maximum Constant Liar (MaximumConstantLiar)":
- The virtual point is assigned using the greatest known value of the merit function across all evaluated points.

For Kriging surrogates specifically, the above and the following strategies are available:

- "Kriging Believer (KB)":
- "Kriging Believer (KrigingBeliever):
- The virtual point is assigned using the mean of the Kriging surrogate at the virtual point.
- "Kriging Believer Upper Bound (KBUB)":
- "Kriging Believer Upper Bound (KrigingBelieverUpperBound)":
- The virtual point is assigned using 3$\sigma$ above the mean of the Kriging surrogate at the virtual point.
- "Kriging Believer Lower Bound (KBLB)":
- "Kriging Believer Lower Bound (KrigingBelieverLowerBound)":
- The virtual point is assigned using 3$\sigma$ below the mean of the Kriging surrogate at the virtual point.


In general, CLmin and KBLB tend to favor exploitation while CLmax and KBUB tend to favor exploration. CLmean and KB tend to be a compromise between the two.
In general, MinimumConstantLiar and KrigingBelieverLowerBound tend to favor exploitation while MaximumConstantLiar and KrigingBelieverUpperBound tend to favor exploration. MeanConstantLiar and KrigingBeliever tend to be a compromise between the two.

## Examples

@@ -50,7 +50,7 @@ y = f.(x)
my_k = Kriging(x, y, lb, ub)

for _ in 1:10
new_x, eis = potential_optimal_points(EI(), lb, ub, my_k, SobolSample(), 3, CLmean!)
new_x, eis = potential_optimal_points(EI(), MeanConstantLiar(), lb, ub, my_k, SobolSample(), 3)
add_point!(my_k, new_x, f.(new_x))
end
```
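The same loop works with any of the strategies listed above; for example, a more exploratory variant (a sketch that assumes `KrigingBelieverUpperBound()` is constructed just like `MeanConstantLiar()` above):

```julia
for _ in 1:10
    new_x, eis = potential_optimal_points(EI(), KrigingBelieverUpperBound(), lb, ub,
        my_k, SobolSample(), 3)
    add_point!(my_k, new_x, f.(new_x))
end
```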
4 changes: 2 additions & 2 deletions docs/src/polychaos.md
@@ -9,7 +9,7 @@ we are trying to fit. Under the hood, PolyChaos.jl has been used.
It is possible to specify a type of polynomial for each dimension of the problem.
### Sampling

We choose to sample f at 20 points between 1 and 6 using the `sample` function. The sampling points are chosen using a low-discrepancy sequence, which can be done by passing `LowDiscrepancySample()` to the `sample` function.
We choose to sample f at 20 points between 1 and 6 using the `sample` function. The sampling points are chosen using a low-discrepancy sequence, which can be done by passing `HaltonSample()` to the `sample` function.

```@example polychaos
using Surrogates
@@ -20,7 +20,7 @@ default()
n = 20
lower_bound = 1.0
upper_bound = 6.0
x = sample(n,lower_bound,upper_bound,LowDiscrepancySample(2))
x = sample(n,lower_bound,upper_bound,HaltonSample())
f = x -> log(x)*x + sin(x)
y = f.(x)
scatter(x, y, label="Sampled points", xlims=(lower_bound, upper_bound), legend=:top)
2 changes: 1 addition & 1 deletion docs/src/radials.md
@@ -141,7 +141,7 @@ This is why its size changes.
size(xys)
```
```@example RadialBasisSurrogateND
surrogate_optimize(booth, SRBF(), lower_bound, upper_bound, radial_basis, UniformSample(), maxiters=50)
surrogate_optimize(booth, SRBF(), lower_bound, upper_bound, radial_basis, RandomSample(), maxiters=50)
```
```@example RadialBasisSurrogateND
size(xys)
5 changes: 2 additions & 3 deletions docs/src/samples.md
@@ -17,7 +17,7 @@ sample(n,lb,ub,S::GridSample)

* Uniform sample
```
sample(n,lb,ub,::UniformSample)
sample(n,lb,ub,::RandomSample)
```

* Sobol sample
@@ -32,8 +32,7 @@ sample(n,lb,ub,::LatinHypercubeSample)

* Low Discrepancy sample
```
LowDiscrepancySample{T}
sample(n,lb,ub,S::LowDiscrepancySample)
sample(n,lb,ub,S::HaltonSample)
```

* Sample on section
2 changes: 1 addition & 1 deletion docs/src/secondorderpoly.md
@@ -18,7 +18,7 @@ f = x -> 3*sin(x) + 10/x
lb = 3.0
ub = 6.0
n = 10
x = sample(n,lb,ub,LowDiscrepancySample(2))
x = sample(n,lb,ub,HaltonSample())
y = f.(x)
scatter(x, y, label="Sampled points", xlims=(lb, ub))
plot!(f, label="True function", xlims=(lb, ub))
4 changes: 2 additions & 2 deletions docs/src/tutorials.md
@@ -43,7 +43,7 @@ using Surrogates
f = x -> exp(x)*x^2+x^3
lb = 0.0
ub = 10.0
x = sample(50,lb,ub,UniformSample())
x = sample(50,lb,ub,RandomSample())
y = f.(x)
p = 1.9
my_krig = Kriging(x,y,lb,ub,p=p)
@@ -58,7 +58,7 @@ std_err = std_error_at_point(my_krig,5.4)
Let's now optimize the Kriging surrogate using the lower confidence bound method; this is just a one-liner:

```@example kriging
surrogate_optimize(f,LCBS(),lb,ub,my_krig,UniformSample(); maxiters = 10, num_new_samples = 10)
surrogate_optimize(f,LCBS(),lb,ub,my_krig,RandomSample(); maxiters = 10, num_new_samples = 10)
```

Surrogate optimization methods have two purposes: they both sample the space in unknown regions and look for the minima at the same time.
4 changes: 2 additions & 2 deletions lib/SurrogatesAbstractGPs/test/runtests.jl
@@ -87,7 +87,7 @@ using Surrogates: sample, SobolSample
a = 2
b = 6
my_k_EI1 = AbstractGPSurrogate(x, y)
surrogate_optimize(objective_function, EI(), a, b, my_k_EI1, UniformSample(),
surrogate_optimize(objective_function, EI(), a, b, my_k_EI1, RandomSample(),
maxiters = 200, num_new_samples = 155)
end

@@ -99,7 +99,7 @@ using Surrogates: sample, SobolSample
lb = [1.0, 1.0]
ub = [6.0, 6.0]
my_k_E1N = AbstractGPSurrogate(x, y)
surrogate_optimize(objective_function_ND, EI(), lb, ub, my_k_E1N, UniformSample())
surrogate_optimize(objective_function_ND, EI(), lb, ub, my_k_E1N, RandomSample())
end

@testset "check working of logpdf_surrogate 1D" begin
9 changes: 6 additions & 3 deletions lib/SurrogatesFlux/src/SurrogatesFlux.jl
@@ -4,7 +4,6 @@ import Surrogates: add_point!, AbstractSurrogate, _check_dimension
export NeuralSurrogate

using Flux
using Flux: @epochs

mutable struct NeuralSurrogate{X, Y, M, L, O, P, N, A, U} <: AbstractSurrogate
x::X
@@ -32,7 +31,9 @@ function NeuralSurrogate(x, y, lb, ub; model = Chain(Dense(length(x[1]), 1), fir
X = vec.(collect.(x))
data = zip(X, y)
ps = Flux.params(model)
@epochs n_echos Flux.train!(loss, ps, data, opt)
for epoch in 1:n_echos
Flux.train!(loss, ps, data, opt)
end
return NeuralSurrogate(x, y, model, loss, opt, ps, n_echos, lb, ub)
end

@@ -58,7 +59,9 @@ function add_point!(my_n::NeuralSurrogate, x_new, y_new)
end
X = vec.(collect.(my_n.x))
data = zip(X, my_n.y)
@epochs my_n.n_echos Flux.train!(my_n.loss, my_n.ps, data, my_n.opt)
for epoch in 1:my_n.n_echos
Flux.train!(my_n.loss, my_n.ps, data, my_n.opt)
end
nothing
end

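From the user side, nothing changes with `@epochs` gone; a minimal smoke-test sketch follows (the `loss`, `opt`, and `n_echos` keyword names are assumptions read off the struct fields and constructor fragment above):

```julia
using Flux, Surrogates, SurrogatesFlux

f(x) = x[1]^2 + x[2]^2
lb, ub = [0.0, 0.0], [5.0, 5.0]
x = sample(20, lb, ub, SobolSample())
y = f.(x)

# Model maps a 2-vector to a scalar; `first` unwraps the 1-element output.
model = Chain(Dense(2, 8, relu), Dense(8, 1), first)
neural = NeuralSurrogate(x, y, lb, ub; model = model,
    loss = (xi, yi) -> Flux.mse(model(xi), yi),
    opt = Descent(0.01), n_echos = 30)
neural((2.0, 3.0))
```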
1 change: 0 additions & 1 deletion lib/SurrogatesFlux/test/runtests.jl
@@ -4,7 +4,6 @@ using SafeTestsets
using Surrogates
using Surrogates: SobolSample
using Flux
using Flux: @epochs
using SurrogatesFlux
using LinearAlgebra
using Zygote
6 changes: 3 additions & 3 deletions lib/SurrogatesMOE/test/runtests.jl
@@ -78,7 +78,7 @@ end
n = 150
x = sample(n, lb, ub, SobolSample())
y = discont_NDIM.(x)
x_test = sample(10, lb, ub, GoldenSample())
x_test = sample(9, lb, ub, GoldenSample())

expert_types = [
KrigingStructure(p = [1.0, 1.0], theta = [1.0, 1.0]),
@@ -116,7 +116,7 @@ end
lb = [-1.0, -1.0]
ub = [1.0, 1.0]
n = 120
x = sample(n, lb, ub, UniformSample())
x = sample(n, lb, ub, RandomSample())
y = discont_NDIM.(x)
x_test = sample(10, lb, ub, GoldenSample())

@@ -184,7 +184,7 @@ end
lb = [-1.0, -1.0]
ub = [1.0, 1.0]
n = 110
x = sample(n, lb, ub, UniformSample())
x = sample(n, lb, ub, RandomSample())
y = discont_NDIM.(x)
expert_types = [InverseDistanceStructure(p = 1.0),
RadialBasisStructure(radial_function = linearRadial(), scale_factor = 1.0,
2 changes: 1 addition & 1 deletion lib/SurrogatesPolyChaos/test/runtests.jl
@@ -54,7 +54,7 @@ using SafeTestsets
lb = [0.0, 0.0]
ub = [10.0, 10.0]
obj_ND = x -> log(x[1]) * exp(x[2])
x = sample(40, lb, ub, UniformSample())
x = sample(40, lb, ub, RandomSample())
y = obj_ND.(x)
my_polyND = PolynomialChaosSurrogate(x, y, lb, ub)
surrogate_optimize(obj_ND, SRBF(), lb, ub, my_polyND, SobolSample(), maxiters = 15)
2 changes: 1 addition & 1 deletion lib/SurrogatesSVM/test/runtests.jl
@@ -20,7 +20,7 @@ using SafeTestsets
obj_N = x -> x[1]^2 * x[2]
lb = [0.0, 0.0]
ub = [10.0, 10.0]
x = sample(100, lb, ub, UniformSample())
x = sample(100, lb, ub, RandomSample())
y = obj_N.(x)
my_svm_ND = SVMSurrogate(x, y, lb, ub)
val = my_svm_ND((5.0, 1.2))
16 changes: 3 additions & 13 deletions src/Optimization.jl
@@ -110,18 +110,8 @@ function surrogate_optimize(obj::Function, ::SRBF, lb, ub, surr::AbstractSurroga

new_lb = incumbent_x .- 3 * scale * norm(incumbent_x .- lb)
new_ub = incumbent_x .+ 3 * scale * norm(incumbent_x .- ub)

@inbounds for i in 1:length(new_lb)
if new_lb[i] < lb[i]
new_lb = collect(new_lb)
new_lb[i] = lb[i]
end
if new_ub[i] > ub[i]
new_ub = collect(new_ub)
new_ub[i] = ub[i]
end
end

new_lb = vec(max.(new_lb, lb))
new_ub = vec(min.(new_ub, ub))
new_sample = sample(num_new_samples, new_lb, new_ub, sample_type)
s = zeros(eltype(surr.x[1]), num_new_samples)
for j in 1:num_new_samples
@@ -2126,7 +2116,7 @@ end

function section_sampler_returner(sample_type::SectionSample, surrn_x, surrn_y,
lb, ub, surrn)
d_fixed = QuasiMonteCarlo.fixed_dimensions(sample_type)
d_fixed = fixed_dimensions(sample_type)
@assert length(surrn_y) == size(surrn_x)[1]
surrn_xy = [(surrn_x[y], surrn_y[y]) for y in 1:length(surrn_y)]
section_surr1_xy = filter(xyz -> xyz[1][d_fixed] == Tuple(sample_type.x0[d_fixed]),
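The first hunk above replaces the per-element bounds check with a broadcasted clamp of the local trust region; a quick REPL-style sketch of the behavior (values are arbitrary):

```julia
lb, ub = [0.0, 0.0], [10.0, 10.0]
new_lb, new_ub = [-2.0, 3.0], [8.0, 12.0]

max.(new_lb, lb)  # [0.0, 3.0]  -- lower corner pulled up into the box
min.(new_ub, ub)  # [8.0, 10.0] -- upper corner pulled down into the box
```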