
Spinachboul update function #457

Closed
wants to merge 12 commits
1 change: 1 addition & 0 deletions docs/pages.jl
@@ -15,6 +15,7 @@ pages = ["index.md"
"Variable Fidelity" => "variablefidelity.md",
"Gradient Enhanced Kriging" => "gek.md",
"GEKPLS" => "gekpls.md",
"Improved GEKPLS" => "Improvedgekpls.md",
"MOE" => "moe.md",
"Parallel Optimization" => "parallel.md",
]
55 changes: 55 additions & 0 deletions docs/src/Improvedgekpls.md
Member:

Like we talked about in #456, GEKPLS is independent from sampling method and the notion of improvement in accuracy depends on the original problem itself and what sampling works better. There is no general recipe for making it better. So, I don't think this tutorial is adding any new value and should be removed.

Contributor Author:

@sathvikbhagavan
You're right! If anything, we could use better optimization techniques to improve gekpls.jl, instead of solely focusing on sampling methods. So, I have removed the file.

Member:

I don't think you have removed it. You might have forgotten to push it. Look at the diff and the commits to confirm.

Member:

delete Improvedgekpls.md and then we can merge this PR.

Contributor Author:

Yeah, I have deleted the file Improvedgekpls.md.
Please check once again whether all the changes are suitable for merging.

Contributor Author:

@ChrisRackauckas and @sathvikbhagavan
Is this PR ready for merging, or are there still things that need to be modified?

@@ -0,0 +1,55 @@
# GEKPLS Function

Gradient Enhanced Kriging with Partial Least Squares Method (GEKPLS) is a surrogate modelling technique that reduces computation time and improves accuracy for high-dimensional problems. The Julia implementation of GEKPLS is adapted from the Python version by [SMT](https://github.com/SMTorg), which is based on this [paper](https://arxiv.org/pdf/1708.02663.pdf).

# Modifications for the Improved GEKPLS Function

To enhance the GEKPLS function, the sampling method was changed from `SobolSample()` to `HaltonSample()`.


```@example gekpls_water_flow

using Surrogates
using Zygote

function water_flow(x)
Member:

This is just GEKPLS, why would it have a different page?

Contributor Author:

Yeah, I added that in pages.jl for the modifications I made in the new file; I will remove it!

    r_w = x[1]
    r = x[2]
    T_u = x[3]
    H_u = x[4]
    T_l = x[5]
    H_l = x[6]
    L = x[7]
    K_w = x[8]
    log_val = log(r / r_w)
    return (2 * pi * T_u * (H_u - H_l)) / (log_val * (1 + (2 * L * T_u / (log_val * r_w^2 * K_w)) + T_u / T_l))
end

n = 1000
lb = [0.05, 100, 63070, 990, 63.1, 700, 1120, 9855]
ub = [0.15, 50000, 115600, 1110, 116, 820, 1680, 12045]
x = sample(n, lb, ub, HaltonSample())           # training samples
grads = gradient.(water_flow, x)                # gradients at the training samples
y = water_flow.(x)
n_test = 100
x_test = sample(n_test, lb, ub, GoldenSample()) # test samples
y_true = water_flow.(x_test)
n_comp = 2
delta_x = 0.0001
extra_points = 2
initial_theta = [0.01 for i in 1:n_comp]
g = GEKPLS(x, y, grads, n_comp, delta_x, lb, ub, extra_points, initial_theta)
y_pred = g.(x_test)
rmse = sqrt(sum(((y_pred - y_true) .^ 2) / n_test)) # root mean squared error
println(rmse) # 0.0347
```
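Assuming the definitions from the example above are still in scope, a minimal sketch like the following (the helper `sampler_rmse` is an illustrative name, not part of this PR) is one way to reproduce the comparison tabulated below:

```julia
# Rebuild the surrogate with a given sampler and report the test RMSE,
# reusing water_flow, n, lb, ub, x_test, y_true and the GEKPLS hyperparameters
# defined in the example above.
function sampler_rmse(sampler)
    x = sample(n, lb, ub, sampler)
    grads = gradient.(water_flow, x)
    y = water_flow.(x)
    g = GEKPLS(x, y, grads, n_comp, delta_x, lb, ub, extra_points, initial_theta)
    y_pred = g.(x_test)
    return sqrt(sum((y_pred - y_true) .^ 2) / n_test)
end

rmse_sobol = sampler_rmse(SobolSample())    # ≈ 0.0215 (see table)
rmse_halton = sampler_rmse(HaltonSample())  # ≈ 0.0214 (see table)
```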

| **Sampling Method** | **RMSE** | **Differences** |
|---------------------|----------|-----------------|
| **Sobol Sampling**  | 0.021472963465423097 | Utilizes digital nets to generate quasi-random numbers, offering low-discrepancy points for improved coverage. Requires careful handling, especially in higher dimensions. |
| **Halton Sampling** | 0.02144270998045834 | Uses a deterministic sequence based on prime numbers to generate points, allowing for quasi-random, low-discrepancy sampling. Simpler to implement, but may exhibit correlations in some dimensions, affecting coverage. |
Member:

This is more for QuasiMonteCarlo.jl
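For context, the samplers themselves come from QuasiMonteCarlo.jl; a minimal sketch, assuming its `sample(n, lb, ub, sampler)` API, of generating the two point sets compared in the table above:

```julia
using QuasiMonteCarlo

lb = [0.05, 100, 63070, 990, 63.1, 700, 1120, 9855]
ub = [0.15, 50000, 115600, 1110, 116, 820, 1680, 12045]

# Each call returns a d × n matrix of quasi-random points in the box [lb, ub].
sobol_points = QuasiMonteCarlo.sample(1000, lb, ub, SobolSample())
halton_points = QuasiMonteCarlo.sample(1000, lb, ub, HaltonSample())
```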

20 changes: 18 additions & 2 deletions docs/src/tensor_prod.md
@@ -1,6 +1,9 @@
# Tensor product function
The tensor product function is defined as:
``f(x) = \prod_{i=1}^d \cos(a\pi x_i)``
A tensor product function combines multiple functions or vectors using the tensor product operation. The tensor product is a mathematical operation that takes two vectors and produces an element of a larger product space, capturing their joint behavior across multiple dimensions.

For instance, consider a tensor product function defined as follows:

``f(x) = \prod_{i=1}^d \cos(a\pi x_i)``
Member:

Is this different from what it was before?

Contributor Author:

No, it is the same, just in a more readable form than before, with the limits and the function written the way we usually write them.
Earlier, it was a bit confusing as to what the actual function is.

Member:

Alright, then it's fine.
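As a quick illustration, the formula above can be written directly in Julia; a minimal sketch (the value of `a` and the name `tensor_prod` are illustrative choices, not taken from the tutorial):

```julia
# Tensor product test function f(x) = prod_i cos(a*pi*x_i),
# for a point x given as a tuple or vector of coordinates.
a = 0.5
tensor_prod(x) = prod(cos(a * pi * xi) for xi in x)

tensor_prod((0.2, 0.7))  # evaluate at a 2-D point
```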


Let's import Surrogates and Plots:
```@example tensor
@@ -38,3 +41,16 @@ plot!(xs,f.(xs), label="True function", legend=:top)
plot!(xs, loba_1.(xs), label="Lobachevsky", legend=:top)
plot!(xs, krig.(xs), label="Kriging", legend=:top)
```

## Kriging Plot

![kriging](https://github.com/Spinachboul/Surrogates.jl/assets/105979087/906e6688-db47-48be-90d1-ea471aacac16)
Member:

I don't think this is necessary. The plots would be generated from the above code block. Are you doing anything new, as in what code produces this plot?

Contributor Author:

I didn't do anything new logically in the code; I just added some explanation about the tensor product function, which I thought would be relevant, and added the plots. I will remove them if they are not required!

Member:

Yes, remove them

Contributor Author:

@sathvikbhagavan
But then, shouldn't we add more mathematical explanation to improve our readme file? Simply stating the function would be too vague for readers!

Member:

I don't understand the usefulness of the plots. They don't provide any new information either. Look at the current plot in the tutorial - https://docs.sciml.ai/Surrogates/stable/tensor_prod/

Contributor Author:

OK, so for now I just need to focus on adding mathematical explanations to the tutorials, right?
I'll share a sample file soon (I won't open a PR for this) and then request your approval if it is going along the correct lines.
Please then suggest the right blend of theory and mathematical explanation so that I can work efficiently.

Member:

Sure. Feel free to open up a PR and continue working. Make it a draft PR and you can push commits to it. When you are ready, mark it ready for review and ping me, and I will take a look. Hopefully you are now more comfortable with the ecosystem and how things work in SciML.

Contributor Author:

Yeah! I am actually getting to know the SciML codebase better. Thanks for your guidance throughout this time.

Contributor Author (@Spinachboul, Jan 5, 2024):

Hey @ChrisRackauckas and @sathvikbhagavan
Please let me know if this explanation fits well with the Kriging model!

[attached images: pic2, pic1 (handwritten notes)]

I was wondering if this is how we could expand the docs in the tutorials section!
(Please forgive the bad handwriting! 😅)

Contributor Author:

And please also suggest models I could work on! I mean, there are many out there, but point me to the ones that are important and really catch the reader's eye.


## Lobachevsky Plot

![lobachevsky](https://github.com/Spinachboul/Surrogates.jl/assets/105979087/678cfc13-0aec-4488-8e4d-39649853ecdd)

## Combined Plot

![combined_plot](https://github.com/Spinachboul/Surrogates.jl/assets/105979087/46762f0d-50c5-4d6c-961a-236fd9fb3ad5)
Member:

This is unrelated?

Member:

Oh I see, but the docs already generate these plots.

Contributor Author:

Yeah, that's right! I attached the plots for reference, showing Kriging and Lobachevsky individually and their plotted curves. And can't we attach plots in both the readme and the docs?

Member:

No, I don't recommend attaching plots; instead, run code to generate them.
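For example, rather than committing static images, the figure can be produced during the docs build itself; a sketch reusing the names `xs`, `f`, `loba_1`, and `krig` from the tensor product tutorial code above:

```julia
using Plots

# Generated at docs-build time instead of attaching a pre-rendered image.
plot(xs, f.(xs), label = "True function", legend = :top)
plot!(xs, loba_1.(xs), label = "Lobachevsky")
plot!(xs, krig.(xs), label = "Kriging")
```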