
Commit 9826638
docs: update docs to use HaltonSample
sathvikbhagavan committed Dec 13, 2023
1 parent e27e5e2 commit 9826638
Showing 6 changed files with 15 additions and 17 deletions.
4 changes: 2 additions & 2 deletions docs/src/InverseDistance.md
@@ -15,15 +15,15 @@ default()

### Sampling

-We choose to sample f in 25 points between 0 and 10 using the `sample` function. The sampling points are chosen using a Low Discrepancy, this can be done by passing `LowDiscrepancySample()` to the `sample` function.
+We choose to sample f at 25 points between 0 and 10 using the `sample` function. The sampling points are chosen using a low-discrepancy sequence, which can be done by passing `HaltonSample()` to the `sample` function.

```@example Inverse_Distance1D
f(x) = sin(x) + sin(x)^2 + sin(x)^3
n_samples = 25
lower_bound = 0.0
upper_bound = 10.0
-x = sample(n_samples, lower_bound, upper_bound, LowDiscrepancySample(;base=2))
+x = sample(n_samples, lower_bound, upper_bound, HaltonSample())
y = f.(x)
scatter(x, y, label="Sampled points", xlims=(lower_bound, upper_bound), legend=:top)
3 changes: 1 addition & 2 deletions docs/src/moe.md
@@ -92,7 +92,7 @@ end
lb = [-1.0, -1.0]
ub = [1.0, 1.0]
n = 150
-x = sample(n, lb, ub, SobolSample())
+x = sample(n, lb, ub, RandomSample())
y = discont_NDIM.(x)
x_test = sample(10, lb, ub, GoldenSample())
@@ -110,7 +110,6 @@ rbf = RadialBasis(x, y, lb, ub)
rbf_pred_vals = rbf.(x_test)
rbf_rmse = rmse(true_vals, rbf_pred_vals)
println(rbf_rmse > moe_rmse)
```
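The `rmse` helper called in this hunk is defined outside it; a minimal sketch consistent with the two-argument usage `rmse(true_vals, pred_vals)` seen above might be:

```julia
# Hypothetical RMSE helper matching the call signature used above;
# the actual definition lives elsewhere in moe.md.
rmse(a, b) = sqrt(sum((a .- b) .^ 2) / length(a))
```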

### Usage Notes - Example With Other Surrogates
16 changes: 8 additions & 8 deletions docs/src/parallel.md
@@ -17,24 +17,24 @@ To ensure that points of interest returned by `potential_optimal_points` are suf

The following strategies are available for virtual point selection for all optimization algorithms:

- "Minimum Constant Liar (CLmin)":
- "Minimum Constant Liar (MinimumConstantLiar)":
- The virtual point is assigned using the lowest known value of the merit function across all evaluated points.
- "Mean Constant Liar (CLmean)":
- "Mean Constant Liar (MeanConstantLiar)":
- The virtual point is assigned using the mean of the merit function across all evaluated points.
- "Maximum Constant Liar (CLmax)":
- "Maximum Constant Liar (MaximumConstantLiar)":
- The virtual point is assigned using the great known value of the merit function across all evaluated points.

For Kriging surrogates specifically, the above and the following strategies are available:

- "Kriging Believer (KB)":
- "Kriging Believer (KrigingBeliever):
- The virtual point is assigned using the mean of the Kriging surrogate at the virtual point.
- "Kriging Believer Upper Bound (KBUB)":
- "Kriging Believer Upper Bound (KrigingBelieverUpperBound)":
- The virtual point is assigned using 3$\sigma$ above the mean of the Kriging surrogate at the virtual point.
- "Kriging Believer Lower Bound (KBLB)":
- "Kriging Believer Lower Bound (KrigingBelieverLowerBound)":
- The virtual point is assigned using 3$\sigma$ below the mean of the Kriging surrogate at the virtual point.


-In general, CLmin and KBLB tend to favor exploitation while CLmax and KBUB tend to favor exploration. CLmean and KB tend to be a compromise between the two.
+In general, MinimumConstantLiar and KrigingBelieverLowerBound tend to favor exploitation while MaximumConstantLiar and KrigingBelieverUpperBound tend to favor exploration. MeanConstantLiar and KrigingBeliever tend to be a compromise between the two.

## Examples

@@ -50,7 +50,7 @@ y = f.(x)
my_k = Kriging(x, y, lb, ub)
for _ in 1:10
-new_x, eis = potential_optimal_points(EI(), lb, ub, my_k, SobolSample(), 3, CLmean!)
+new_x, eis = potential_optimal_points(EI(), MeanConstantLiar(), lb, ub, my_k, SobolSample(), 3)
add_point!(my_k, new_x, f.(new_x))
end
```
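Swapping the strategy argument changes the exploration/exploitation balance. A minimal variation of the loop above, assuming the remaining strategy constructors follow the same pattern as `MeanConstantLiar()`:

```julia
# Bias the search toward exploration with the Kriging Believer Upper Bound
# strategy (constructor name assumed from the strategy list above).
for _ in 1:10
    new_x, eis = potential_optimal_points(EI(), KrigingBelieverUpperBound(), lb, ub, my_k, SobolSample(), 3)
    add_point!(my_k, new_x, f.(new_x))
end
```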
4 changes: 2 additions & 2 deletions docs/src/polychaos.md
@@ -9,7 +9,7 @@ we are trying to fit. Under the hood, PolyChaos.jl has been used.
It is possible to specify a type of polynomial for each dimension of the problem.
### Sampling

-We choose to sample f in 25 points between 0 and 10 using the `sample` function. The sampling points are chosen using a Low Discrepancy, this can be done by passing `LowDiscrepancySample()` to the `sample` function.
+We choose to sample f at 20 points between 1 and 6 using the `sample` function. The sampling points are chosen using a low-discrepancy sequence, which can be done by passing `HaltonSample()` to the `sample` function.

```@example polychaos
using Surrogates
@@ -20,7 +20,7 @@ default()
n = 20
lower_bound = 1.0
upper_bound = 6.0
-x = sample(n,lower_bound,upper_bound,LowDiscrepancySample(2))
+x = sample(n,lower_bound,upper_bound,HaltonSample())
f = x -> log(x)*x + sin(x)
y = f.(x)
scatter(x, y, label="Sampled points", xlims=(lower_bound, upper_bound), legend=:top)
3 changes: 1 addition & 2 deletions docs/src/samples.md
@@ -32,8 +32,7 @@ sample(n,lb,ub,::LatinHypercubeSample)

* Low Discrepancy sample
```
-LowDiscrepancySample{T}
-sample(n,lb,ub,S::LowDiscrepancySample)
+sample(n,lb,ub,S::HaltonSample)
```

* Sample on section
2 changes: 1 addition & 1 deletion docs/src/secondorderpoly.md
@@ -18,7 +18,7 @@ f = x -> 3*sin(x) + 10/x
lb = 3.0
ub = 6.0
n = 10
-x = sample(n,lb,ub,LowDiscrepancySample(2))
+x = sample(n,lb,ub,HaltonSample())
y = f.(x)
scatter(x, y, label="Sampled points", xlims=(lb, ub))
plot!(f, label="True function", xlims=(lb, ub))
