Parallel Surrogate Models Through Ask-tell Interface #435

Merged: 16 commits, Sep 22, 2023
Changes from all commits
1 change: 1 addition & 0 deletions docs/pages.jl
@@ -16,6 +16,7 @@ pages = ["index.md"
"Gradient Enhanced Kriging" => "gek.md",
"GEKPLS" => "gekpls.md",
"MOE" => "moe.md",
"Parallel Optimization" => "parallel.md"
]
"User guide" => [
"Samples" => "samples.md",
56 changes: 56 additions & 0 deletions docs/src/parallel.md
@@ -0,0 +1,56 @@
# Parallel Optimization

In some situations, it can be beneficial to run multiple function evaluations in parallel. For example, if your objective function is very expensive to evaluate, you may want to distribute a batch of evaluations across several workers at once.

## Ask-Tell Interface

To enable parallel optimization, we make use of an Ask-Tell interface. The user constructs the initial surrogate model in the same way as for non-parallel optimization, but instead of calling `surrogate_optimize`, calls `potential_optimal_points`. This returns the coordinates of the points that the optimizer has determined are most useful to evaluate next; how those points are evaluated is up to the user. The Ask-Tell interface requires more manual control than `surrogate_optimize`, but it allows for more flexibility. After the points have been evaluated, the user *tells* the surrogate model about them with the `add_point!` function.
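The ask/evaluate/tell cycle can be sketched with a self-contained toy. The `ToySurrogate` type and `ask` function below are illustrative stand-ins, not the Surrogates.jl API (which uses `potential_optimal_points` and `add_point!`; see the Examples section):

```julia
# Toy stand-in for a surrogate model: it just stores evaluated data.
struct ToySurrogate
    x::Vector{Float64}
    y::Vector{Float64}
end

# "Ask": propose midpoints of the n largest gaps between evaluated points;
# the real optimizer ranks candidates with a merit function instead.
function ask(s::ToySurrogate, n)
    xs = sort(s.x)
    gaps = diff(xs)
    order = sortperm(gaps, rev = true)[1:n]
    return [(xs[i] + xs[i + 1]) / 2 for i in order]
end

f = x -> (x - 3)^2                 # stand-in "expensive" objective
xs0 = [0.0, 2.0, 5.0, 10.0]
s = ToySurrogate(xs0, f.(xs0))

for _ in 1:3
    new_x = ask(s, 2)              # ask for 2 candidate points
    new_y = f.(new_x)              # evaluate them (could run in parallel)
    append!(s.x, new_x)            # tell the model the new data
    append!(s.y, new_y)
end
```

The key property is that evaluation happens entirely on the user's side between the ask and the tell.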

## Virtual Points

To ensure that points of interest returned by `potential_optimal_points` are sufficiently far from each other, the function makes use of *virtual points*. They are used as follows:
1. `potential_optimal_points` is told to return `n` points.
2. The point with the highest merit function value is selected.
3. This point is now treated as a virtual point and is assigned a temporary value that changes the landscape of the merit function. How the temporary value is chosen depends on the strategy used (see below).
4. The point with the new highest merit is selected.
5. The process is repeated until `n` points have been selected.
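The five steps above can be sketched as a self-contained loop. The nearest-neighbor surrogate, the simplified merit function, and the liar value below are deliberate toy assumptions, not the package internals:

```julia
# Nearest-neighbor "surrogate" so that adding a virtual point visibly
# changes predictions.
xs = [0.0, 3.0, 6.0]          # evaluated locations
ys = sin.(xs)                 # observed values (toy objective)
predict(x) = ys[argmin(abs.(xs .- x))]

candidates = collect(0.0:0.25:6.0)
liar = minimum(ys)            # CLmin-style: lie with the best known value

selected = Float64[]
for _ in 1:3                  # ask for n = 3 points
    # merit: predicted value, discounted by distance to the nearest known point
    m = [predict(c) - 0.1 * minimum(abs.(c .- xs)) for c in candidates]
    best = argmin(m)
    push!(selected, candidates[best])
    push!(xs, candidates[best])   # step 3: add it as a virtual point...
    push!(ys, liar)               # ...with a temporary (liar) value
    deleteat!(candidates, best)   # steps 4-5: re-rank and repeat
end
```

Because each selection reshapes the merit landscape, the three selected points end up spread out rather than clustered at a single optimum.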

The following strategies are available for virtual point selection for all optimization algorithms:

- "Minimum Constant Liar (CLmin)":
- The virtual point is assigned using the lowest known value of the merit function across all evaluated points.
- "Mean Constant Liar (CLmean)":
- The virtual point is assigned using the mean of the merit function across all evaluated points.
- "Maximum Constant Liar (CLmax)":
- The virtual point is assigned using the greatest known value of the merit function across all evaluated points.

For Kriging surrogates specifically, the following strategies are available in addition to the above:

- "Kriging Believer (KB)":
- The virtual point is assigned using the mean of the Kriging surrogate at the virtual point.
- "Kriging Believer Upper Bound (KBUB)":
- The virtual point is assigned using 3$\sigma$ above the mean of the Kriging surrogate at the virtual point.
- "Kriging Believer Lower Bound (KBLB)":
- The virtual point is assigned using 3$\sigma$ below the mean of the Kriging surrogate at the virtual point.


In general, CLmin and KBLB tend to favor exploitation while CLmax and KBUB tend to favor exploration. CLmean and KB tend to be a compromise between the two.
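As a hedged sketch, the three constant-liar values amount to simple reductions over the observed objective values. The types below are illustrative, not the package's exported structs (Surrogates.jl exports e.g. `MeanConstantLiar`):

```julia
using Statistics: mean

# Illustrative singleton types mirroring the strategy names above.
abstract type LiarStrategy end
struct CLmin <: LiarStrategy end
struct CLmean <: LiarStrategy end
struct CLmax <: LiarStrategy end

liar_value(::CLmin, y) = minimum(y)   # exploitation-leaning
liar_value(::CLmean, y) = mean(y)     # compromise
liar_value(::CLmax, y) = maximum(y)   # exploration-leaning

y = [4.0, 1.0, 7.0]
liar_value(CLmin(), y)   # 1.0
liar_value(CLmean(), y)  # 4.0
liar_value(CLmax(), y)   # 7.0
```

Dispatching on singleton strategy types like this is also how the patch itself selects behavior, via `calculate_liars(strategy, ...)`.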

## Examples

```@example
using Surrogates

lb = 0.0
ub = 10.0
f = x -> log(x) * exp(x)
x = sample(5, lb, ub, SobolSample())
y = f.(x)

my_k = Kriging(x, y, lb, ub)

for _ in 1:10
    new_x, eis = potential_optimal_points(EI(), MeanConstantLiar(), lb, ub, my_k, SobolSample(), 3)
    add_point!(my_k, new_x, f.(new_x))
end
```
283 changes: 283 additions & 0 deletions src/Optimization.jl
100644 → 100755
@@ -3,6 +3,14 @@
using Zygote

abstract type SurrogateOptimizationAlgorithm end
abstract type ParallelStrategy end

struct KrigingBeliever <: ParallelStrategy end
struct KrigingBelieverUpperBound <: ParallelStrategy end
struct KrigingBelieverLowerBound <: ParallelStrategy end
struct MinimumConstantLiar <: ParallelStrategy end
struct MaximumConstantLiar <: ParallelStrategy end
struct MeanConstantLiar <: ParallelStrategy end

#single objective optimization
struct SRBF <: SurrogateOptimizationAlgorithm end
@@ -375,6 +383,214 @@
end
end

# Ask SRBF ND
function potential_optimal_points(::SRBF, strategy, lb, ub, surr::AbstractSurrogate, sample_type::SamplingAlgorithm, n_parallel;
        num_new_samples = 500)

    scale = 0.2
    w_range = [0.3, 0.5, 0.7, 0.95]
    w_cycle = Iterators.cycle(w_range)

    w, state = iterate(w_cycle)

    #Vector containing size in each direction
    box_size = lb - ub
    dtol = 1e-3 * norm(ub - lb)
    d = length(surr.x)
    incumbent_x = surr.x[argmin(surr.y)]

    new_lb = incumbent_x .- 3 * scale * norm(incumbent_x .- lb)
    new_ub = incumbent_x .+ 3 * scale * norm(incumbent_x .- ub)

    @inbounds for i in 1:length(new_lb)
        if new_lb[i] < lb[i]
            new_lb = collect(new_lb)
            new_lb[i] = lb[i]
        end
        if new_ub[i] > ub[i]
            new_ub = collect(new_ub)
            new_ub[i] = ub[i]
        end
    end

    new_sample = sample(num_new_samples, new_lb, new_ub, sample_type)
    s = zeros(eltype(surr.x[1]), num_new_samples)
    for j in 1:num_new_samples
        s[j] = surr(new_sample[j])
    end
    s_max = maximum(s)
    s_min = minimum(s)

    d_min = norm(box_size .+ 1)
    d_max = 0.0
    for r in 1:length(surr.x)
        for c in 1:num_new_samples
            distance_rc = norm(surr.x[r] .- new_sample[c])
            if distance_rc > d_max
                d_max = distance_rc
            end
            if distance_rc < d_min
                d_min = distance_rc
            end
        end
    end

    tmp_surr = deepcopy(surr)

    new_addition = 0
    diff_x = zeros(eltype(surr.x[1]), d)

    evaluation_of_merit_function = zeros(float(eltype(surr.x[1])), num_new_samples)
    proposed_points_x = Vector{typeof(surr.x[1])}(undef, n_parallel)
    merit_of_proposed_points = zeros(Float64, n_parallel)

    while new_addition < n_parallel
        #find minimum
        @inbounds for r in eachindex(evaluation_of_merit_function)
            evaluation_of_merit_function[r] = merit_function(new_sample[r], w, tmp_surr,
                                                             s_max, s_min, d_max, d_min,
                                                             box_size)
        end

        min_index = argmin(evaluation_of_merit_function)
        new_min_x = new_sample[min_index]
        min_x_merit = evaluation_of_merit_function[min_index]

        for l in 1:d
            diff_x[l] = norm(surr.x[l] .- new_min_x)
        end
        bit_x = diff_x .> dtol
        #new_min_x has to have some distance from krig.x
        if false in bit_x
            #The new_point is not actually that new, discard it!
            deleteat!(evaluation_of_merit_function, min_index)
            deleteat!(new_sample, min_index)

            if length(new_sample) == 0
                println("Out of sampling points")
                index = argmin(surr.y)
                return (surr.x[index], surr.y[index])
            end
        else
            new_addition += 1
            proposed_points_x[new_addition] = new_min_x
            merit_of_proposed_points[new_addition] = min_x_merit

            # Update temporary surrogate using provided strategy
            calculate_liars(strategy, tmp_surr, surr, new_min_x)
        end

        #4) Update w
        w, state = iterate(w_cycle, state)
    end

    return (proposed_points_x, merit_of_proposed_points)
end

# Ask SRBF 1D
function potential_optimal_points(::SRBF, strategy, lb::Number, ub::Number, surr::AbstractSurrogate,
        sample_type::SamplingAlgorithm, n_parallel;
        num_new_samples = 500)
    scale = 0.2
    success = 0
    w_range = [0.3, 0.5, 0.7, 0.95]
    w_cycle = Iterators.cycle(w_range)

    w, state = iterate(w_cycle)

    box_size = lb - ub
    success = 0
    failures = 0
    dtol = 1e-3 * norm(ub - lb)
    num_of_iterations = 0

    #1) Sample near incumbent (the 2 fraction is arbitrary here)
    incumbent_x = surr.x[argmin(surr.y)]

    new_lb = incumbent_x - scale * norm(incumbent_x - lb)
    new_ub = incumbent_x + scale * norm(incumbent_x - ub)
    if new_lb < lb
        new_lb = lb
    end
    if new_ub > ub
        new_ub = ub
    end

    new_sample = sample(num_new_samples, new_lb, new_ub, sample_type)

    #2) Create merit function
    s = zeros(eltype(surr.x[1]), num_new_samples)
    for j in 1:num_new_samples
        s[j] = surr(new_sample[j])
    end
    s_max = maximum(s)
    s_min = minimum(s)

    d_min = box_size + 1
    d_max = 0.0
    for r in 1:length(surr.x)
        for c in 1:num_new_samples
            distance_rc = norm(surr.x[r] - new_sample[c])
            if distance_rc > d_max
                d_max = distance_rc
            end
            if distance_rc < d_min
                d_min = distance_rc
            end
        end
    end

    new_addition = 0
    proposed_points_x = zeros(eltype(new_sample[1]), n_parallel)
    merit_of_proposed_points = zeros(eltype(new_sample[1]), n_parallel)

    # Temporary surrogate for virtual points
    tmp_surr = deepcopy(surr)

    # Loop until we have n_parallel new points
    while new_addition < n_parallel

        #3) Evaluate merit function at the sampled points in parallel
        evaluation_of_merit_function = merit_function.(new_sample, w, tmp_surr, s_max,
                                                       s_min, d_max, d_min, box_size)

        #find minimum
        min_index = argmin(evaluation_of_merit_function)
        new_min_x = new_sample[min_index]
        min_x_merit = evaluation_of_merit_function[min_index]

        diff_x = abs.(tmp_surr.x .- new_min_x)
        bit_x = diff_x .> dtol
        #new_min_x has to have some distance from krig.x
        if false in bit_x
            #The new_point is not actually that new, discard it!
            deleteat!(evaluation_of_merit_function, min_index)
            deleteat!(new_sample, min_index)
            if length(new_sample) == 0
                println("Out of sampling points")
                return (proposed_points_x[1:new_addition],
                        merit_of_proposed_points[1:new_addition])
            end
        else
            new_addition += 1
            proposed_points_x[new_addition] = new_min_x
            merit_of_proposed_points[new_addition] = min_x_merit

            # Update temporary surrogate using provided strategy
            calculate_liars(strategy, tmp_surr, surr, new_min_x)
        end

        #4) Update w
        w, state = iterate(w_cycle, state)
    end

    return (proposed_points_x, merit_of_proposed_points)
end


"""
This is an implementation of Lower Confidence Bound (LCB),
a popular acquisition function in Bayesian optimization.
@@ -569,6 +785,73 @@
println("Completed maximum number of iterations")
end

# Ask EI 1D & ND
function potential_optimal_points(::EI, strategy, lb, ub, krig, sample_type::SamplingAlgorithm, n_parallel::Number;
        num_new_samples = 100)

    lb = krig.lb
    ub = krig.ub

    dtol = 1e-3 * norm(ub - lb)
    eps = 0.01

    tmp_krig = deepcopy(krig) # Temporary copy of the kriging model to store virtual points

[Review comment, Member] This can be pretty heavy. Did you check it did not cause a significant performance issue?
[Reply, Contributor Author] I did some tests and it seems like in all cases, deepcopy will never be anywhere close to your bottleneck. The bottleneck is always either evaluating your objective function or calculating the expected improvement or merit function.

    new_x_max = Vector{typeof(tmp_krig.x[1])}(undef, n_parallel) # New x point
    new_EI_max = zeros(eltype(tmp_krig.x[1]), n_parallel) # EI at new x point

    for i in 1:n_parallel
        # Sample lots of points from the design space -- we will evaluate the EI function at these points
        new_sample = sample(num_new_samples, lb, ub, sample_type)

        # Find the best point so far
        f_min = minimum(tmp_krig.y)

        # Allocate some arrays
        evaluations = zeros(eltype(tmp_krig.x[1]), num_new_samples) # Holds EI function evaluations
        point_found = false # Whether we have found a new point to test
        while point_found == false
            # For each point in the sample set, evaluate the Expected Improvement function
            for j in eachindex(new_sample)
                std = std_error_at_point(tmp_krig, new_sample[j])
                u = tmp_krig(new_sample[j])
                if abs(std) > 1e-6
                    z = (f_min - u - eps) / std
                else
                    z = 0
                end
                # Evaluate EI at point new_sample[j]
                evaluations[j] = (f_min - u - eps) * cdf(Normal(), z) +
                                 std * pdf(Normal(), z)
            end
            # find the sample which maximizes the EI function
            index_max = argmax(evaluations)
            x_new = new_sample[index_max] # x point which maximized EI
            y_new = maximum(evaluations) # EI at the new point
            diff_x = [norm(prev_point .- x_new) for prev_point in tmp_krig.x]
            bit_x = [diff_x_point .> dtol for diff_x_point in diff_x]
            #new_min_x has to have some distance from tmp_krig.x
            if false in bit_x
                #The new_point is not actually that new, discard it!
                deleteat!(evaluations, index_max)
                deleteat!(new_sample, index_max)
                if length(new_sample) == 0
                    println("Out of sampling points")
                    index = argmin(tmp_krig.y)
                    return (tmp_krig.x[index], tmp_krig.y[index])
                end
            else
                point_found = true
                new_x_max[i] = x_new
                new_EI_max[i] = y_new
                calculate_liars(strategy, tmp_krig, krig, x_new)
            end
        end
    end

    return (new_x_max, new_EI_max)
end

"""
This is an implementation of Expected Improvement (EI),
arguably the most popular acquisition function in Bayesian optimization.
6 changes: 6 additions & 0 deletions src/Surrogates.jl
100644 → 100755
@@ -18,6 +18,7 @@ include("VariableFidelity.jl")
include("Earth.jl")
include("GEK.jl")
include("GEKPLS.jl")
include("VirtualStrategy.jl")

current_surrogates = ["Kriging", "LinearSurrogate", "LobachevskySurrogate",
"NeuralSurrogate",
@@ -88,6 +89,11 @@ export LobachevskyStructure, NeuralStructure, RandomForestStructure,
export WendlandStructure
export AbstractSurrogate, SamplingAlgorithm
export Kriging, RadialBasis, add_point!, current_estimate, std_error_at_point
# Parallelization Strategies
export potential_optimal_points
export MinimumConstantLiar, MaximumConstantLiar, MeanConstantLiar, KrigingBeliever,
KrigingBelieverUpperBound, KrigingBelieverLowerBound

# radial basis functions
export linearRadial, cubicRadial, multiquadricRadial, thinplateRadial
