Commit e75b13a: Update tensor_prod.md
Spinachboul authored Jan 10, 2024 (1 parent: 13940fc)
Showing 1 changed file with 91 additions and 25 deletions: docs/src/tensor_prod.md
The tensor product function is defined as:
``f(x) = \prod_{i=1}^{d} \cos(a \pi x_i)``

Where\
d: Represents the dimensionality of the input vector x\
xi: Represents the ith component of the input vector\
a: A constant parameter

Let's import Surrogates and Plots:
```@example tensor
using Surrogates
using Plots
default()
```
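As a quick, purely illustrative check of the formula (not part of the original tutorial), here is the value at a single hand-picked 2-D point with ``a = 0.5``:

```julia
# Purely illustrative: evaluate f at the 2-D point (0.5, 0.25) with a = 0.5
a = 0.5
x = (0.5, 0.25)
f_x = prod(cos(a * π * xi) for xi in x)   # cos(0.25π) * cos(0.125π) ≈ 0.707 * 0.924 ≈ 0.653
```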

# Generating Data and Plotting

```@example tensor
function tensor_product_function(x, a)
    return prod(cos(a * π * xi) for xi in x)
end
# Generate training and test data
function generate_data(n, lb, ub, a)
    x_train = sample(n, lb, ub, SobolSample())
    y_train = tensor_product_function.(x_train, a)   # evaluate f at each training point
    x_test = sample(1000, lb, ub, SobolSample())     # test inputs
    y_test = tensor_product_function.(x_test, a)     # test labels
    return x_train, y_train, x_test, y_test
end
```
```@example tensor
# Visualize training data and the true function
function plot_data_and_true_function(x_train, y_train, x_test, y_test, a, lb, ub)
    xs = range(lb, ub, length=1000)
    scatter(x_train, y_train, label="Training points", xlims=(lb, ub), ylims=(-1, 1), legend=:top)
    plot!(xs, tensor_product_function.(xs, a), label="True function", legend=:top)
    scatter!(x_test, y_test, label="Test points")
end
# Generate data and plot
n = 30
lb = -5.0
ub = 5.0
a = 0.5
x_train, y_train, x_test, y_test = generate_data(n, lb, ub, a)
plot_data_and_true_function(x_train, y_train, x_test, y_test, a, lb, ub)
```

# Training various Surrogates
Now let's train various surrogate models and evaluate their performance on the test data:

```@example tensor
# Train different surrogate models (both constructors take the bounds lb, ub)
function train_surrogates(x_train, y_train, lb, ub)
    loba = LobachevskySurrogate(x_train, y_train, lb, ub)
    krig = Kriging(x_train, y_train, lb, ub)
    return loba, krig
end
# Evaluate surrogate predictions at each test point
function evaluate_surrogates(loba, krig, x_test)
    loba_pred = loba.(x_test)
    krig_pred = krig.(x_test)
    return loba_pred, krig_pred
end
# Plot the surrogates against the true function over a fine grid
function plot_surrogate_predictions(loba, krig, a, lb, ub)
    xs = range(lb, ub, length=1000)
    plot(xs, tensor_product_function.(xs, a), label="True function", legend=:top)
    plot!(xs, loba.(xs), label="Lobachevsky")
    plot!(xs, krig.(xs), label="Kriging")
end
# Train surrogates, evaluate them on the test data, and plot the fits
loba, krig = train_surrogates(x_train, y_train, lb, ub)
loba_pred, krig_pred = evaluate_surrogates(loba, krig, x_test)
plot_surrogate_predictions(loba, krig, a, lb, ub)
```

# Reporting the best Surrogate Model
To determine the best surrogate, you can compare their accuracy on the test data. For instance, you can calculate and compare the mean squared error (MSE) or any other relevant metric:

```@example tensor
using Statistics

# Evaluate performance metrics
function calculate_performance_metrics(pred, true_vals)
    return mean((pred .- true_vals) .^ 2)
end

# Compare surrogate model performances
mse_loba = calculate_performance_metrics(loba_pred, y_test)
mse_krig = calculate_performance_metrics(krig_pred, y_test)

if mse_loba < mse_krig
    println("Lobachevsky Surrogate is the best with MSE: ", mse_loba)
else
    println("Kriging Surrogate is the best with MSE: ", mse_krig)
end
```
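MSE is only one possible criterion. As a sketch (the helper below and the choice of metric are illustrative, not part of the original tutorial), the same comparison can be repeated with the worst-case absolute error:

```julia
# Illustrative alternative metric: maximum absolute error over the test set
max_abs_error(pred, true_vals) = maximum(abs.(pred .- true_vals))

max_err_loba = max_abs_error(loba_pred, y_test)
max_err_krig = max_abs_error(krig_pred, y_test)
println("Max abs error, Lobachevsky: ", max_err_loba)
println("Max abs error, Kriging: ", max_err_krig)
```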

This structure provides a framework for generating data, training various surrogate models, evaluating their performance on test data, and reporting the best surrogate based on performance metrics like MSE. Adjustments can be made to suit specific evaluation criteria or additional surrogate models, as sketched below.
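For example, one way to add a third model to the comparison is sketched here; it assumes the `RadialBasis` constructor from Surrogates.jl with the same `(x, y, lb, ub)` positional signature used above:

```julia
# Sketch: extend the comparison with a radial basis surrogate (constructor signature assumed)
rbf = RadialBasis(x_train, y_train, lb, ub)
rbf_pred = rbf.(x_test)
mse_rbf = calculate_performance_metrics(rbf_pred, y_test)

# Pick the model with the smallest MSE
mse_all = Dict("Lobachevsky" => mse_loba, "Kriging" => mse_krig, "RadialBasis" => mse_rbf)
best_mse, best_name = findmin(mse_all)
println("Best surrogate: ", best_name, " with MSE: ", best_mse)
```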
