Update tensor_prod.md #464
Closed
16 commits
13940fc … af820a3 — 16 commits by Spinachboul, all titled "Update tensor_prod.md"
@@ -1,40 +1,126 @@
# Tensor product function
The tensor product function is defined as:

``f(x) = \prod_{i=1}^d \cos(a \pi x_i)``

Where\
``d``: the dimensionality of the input vector ``x``\
``x_i``: the ``i``-th component of the input vector\
``a``: a constant parameter
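As a quick illustration of the definition, here is a minimal standalone sketch (plain `julia` block, not part of the tutorial session) evaluating the product for a 2-D input with ``a = 0.5``, the value used throughout this tutorial:

```julia
# Minimal sketch of the tensor product function for a 2-D input, with a = 0.5.
a = 0.5
f(x) = prod(cos.(a .* pi .* x))

println(f([0.0, 0.0]))  # cos(0) * cos(0) = 1.0
println(f([1.0, 1.0]))  # cos(pi/2)^2, numerically ~0
```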
# Let's import Surrogates and Plots
```@example tensor
using Surrogates
using Plots
using Statistics
using SurrogatesPolyChaos
using SurrogatesRandomForest
default()
```
# Define the function
```@example tensor
function tensor_product_function(x)
    a = 0.5
    return prod(cos.(a * pi * x))
end
```
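A couple of spot checks on this definition (standalone `julia` block; the function is restated so the snippet runs on its own):

```julia
# Spot-check the definition: at x = 0 every cosine factor is 1.
tensor_product_function(x) = prod(cos.(0.5 .* pi .* x))

println(tensor_product_function(0.0))  # 1.0
println(tensor_product_function(2.0))  # cos(pi) = -1.0
```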
# Sampling parameters for training and test data
```@example tensor
lb = -5.0 # Lower bound of sampling range
ub = 5.0  # Upper bound of sampling range
n = 30    # Number of training points
```
# Generate training and test data
```@example tensor
x_train = sample(n, lb, ub, SobolSample())    # Sample training data points
y_train = tensor_product_function.(x_train)   # Calculate corresponding function values
x_test = sample(1000, lb, ub, RandomSample()) # Sample a larger test data set
y_test = tensor_product_function.(x_test)     # Calculate corresponding true function values
```
# Plot training and testing points
```@example tensor
scatter(x_train, y_train, label = "Training Points", xlabel = "X-axis", ylabel = "Y-axis", legend = :topright)
scatter!(x_test, y_test, label = "Testing Points")
```
# Train the following Surrogates:
## Kriging | Lobachevsky | Radial Basis | RandomForest | Polynomial Chaos
```@example tensor
randomforest_surrogate = RandomForestSurrogate(x_train, y_train, lb, ub, num_round = 2)
radial_surrogate = RadialBasis(x_train, y_train, lb, ub)
lobachevsky_surrogate = LobachevskySurrogate(x_train, y_train, lb, ub, alpha = 2.0, n = 6)
kriging_surrogate = Kriging(x_train, y_train, lb, ub)
poly1 = PolynomialChaosSurrogate(x_train, y_train, lb, ub)
poly2 = PolynomialChaosSurrogate(x_train, y_train, lb, ub, op = SurrogatesPolyChaos.GaussOrthoPoly(5))
```
# Obtain predictions from all surrogates for the test data
```@example tensor
loba_pred = lobachevsky_surrogate.(x_test)
radial_pred = radial_surrogate.(x_test)
kriging_pred = kriging_surrogate.(x_test)
random_forest_pred = randomforest_surrogate.(x_test)
poly1_pred = poly1.(x_test)
poly2_pred = poly2.(x_test)
```
# Define a function to calculate Mean Squared Error (MSE)
```@example tensor
function calculate_mse(predictions, true_values)
    return mean((predictions .- true_values) .^ 2) # Mean of squared errors
end
```
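A quick sanity check of the metric (standalone `julia` block; `mean` comes from the Statistics standard library):

```julia
using Statistics

calculate_mse(predictions, true_values) = mean((predictions .- true_values) .^ 2)

println(calculate_mse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0 for a perfect fit
println(calculate_mse([2.0, 0.0], [0.0, 0.0]))            # (4 + 0) / 2 = 2.0
```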
# Calculate MSE for all Surrogate Models
```@example tensor
mse_loba = calculate_mse(loba_pred, y_test)
mse_krig = calculate_mse(kriging_pred, y_test)
mse_radial = calculate_mse(radial_pred, y_test)
mse_rf = calculate_mse(random_forest_pred, y_test)
mse_poly1 = calculate_mse(poly1_pred, y_test)
mse_poly2 = calculate_mse(poly2_pred, y_test)
```
# Compare the performance of all Surrogate Models
```@example tensor
mse_values = Dict("loba" => mse_loba, "krig" => mse_krig, "radial" => mse_radial, "rf" => mse_rf, "poly1" => mse_poly1, "poly2" => mse_poly2)

# Sort the MSE values in ascending order and display them
sorted_mse = sort(collect(mse_values), by = x -> x[2])
for (model, mse) in sorted_mse
    println("$model : $mse")
end
```
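The same sort-and-report pattern on a toy dictionary (hypothetical scores, just to illustrate the `sort(collect(...), by = ...)` idiom on its own):

```julia
# Hypothetical scores; sorting pairs by their second element (the value).
scores = Dict("a" => 0.3, "b" => 0.1, "c" => 0.2)
sorted = sort(collect(scores), by = p -> p[2])
for (model, mse) in sorted
    println("$model : $mse")
end
# Prints b, c, a — ascending order of score.
```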
# Plot true function vs. model predictions
```@example tensor
xs = lb:0.01:ub
plot(xs, tensor_product_function.(xs), label = "True function", legend = :top, color = :black)
plot!(xs, lobachevsky_surrogate.(xs), label = "Lobachevsky", color = :red)
plot!(xs, kriging_surrogate.(xs), label = "Kriging", color = :blue)
plot!(xs, randomforest_surrogate.(xs), label = "Random Forest", color = :green)
plot!(xs, poly1.(xs), label = "Polynomial Chaos", color = :purple)
plot!(xs, poly2.(xs), label = "Polynomial Chaos (Gauss)", color = :magenta)
plot!(xs, radial_surrogate.(xs), label = "Radial Basis", color = :orange)
```
# Tabular Representation of all Surrogates and their MSE Scores

| Surrogate Model | MSE Score            |
|-----------------|----------------------|
| Kriging         | 4.70493378010316e-5  |
| Lobachevsky     | 7.967792682690972e-5 |
| Radial Basis    | 0.004972603698976124 |
| RandomForest    | 0.2023233139232778   |
| Poly1           | 0.4124881232761028   |
| Poly2           | 0.42166909818265136  |
This structure provides a framework for generating data, training various surrogate models, evaluating their performance on test data, and reporting the best surrogate based on performance metrics like MSE. Adjustments can be made to suit specific evaluation criteria or additional surrogate models.
This doesn't make any sense here. What does this have to do with the tensor product function?
Actually, this is what we can do for all the benchmarks; @sathvikbhagavan suggested that we should analyse all the models and choose the best one for a specific benchmark.
Benchmarks wouldn't go into the documentation; that would go into the SciMLBenchmarks.
Also, the solution shouldn't be hardcoded but instead generated by computation
Generated as in using Documenter.jl?
oh ok! But then what should I do for this PR? I mean we can still include choosing the best surrogate for a specific benchmark right?
Also, I am not very familiar with how we use Weave. Do you have a tutorial or something?
just look at the repo instructions https://github.com/SciML/SciMLBenchmarks.jl?tab=readme-ov-file#contributing
Oh ok! And what should I do with this PR? Should I generate the code blocks using Weave right now?
Make a Weave jmd and open a PR to SciMLBenchmarks.