Spinachboul update function #457
New file: `Improvedgekpls.md`

@@ -0,0 +1,55 @@
# GEKPLS Function

Gradient Enhanced Kriging with Partial Least Squares method (GEKPLS) is a surrogate modelling technique that reduces computation time and improves accuracy for high-dimensional problems. The Julia implementation of GEKPLS is adapted from the Python version by [SMT](https://github.com/SMTorg), which is based on this [paper](https://arxiv.org/pdf/1708.02663.pdf).

## Modifications for the Improved GEKPLS Function

To enhance the GEKPLS function, the sampling method was changed from `SobolSample()` to `HaltonSample()`.
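As a quick illustration of how the two samplers are used, here is a minimal sketch; the bounds and number of points are arbitrary choices for illustration, not values from the tutorial:

```julia
using Surrogates

lb = [0.0, 0.0]
ub = [1.0, 1.0]

# Draw 5 quasi-random points in [0, 1]^2 with each sampler
sobol_points = sample(5, lb, ub, SobolSample())
halton_points = sample(5, lb, ub, HaltonSample())

println(sobol_points)
println(halton_points)
```

Both samplers produce deterministic, low-discrepancy point sets; only the underlying sequence construction differs.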
```@example gekpls_water_flow
using Surrogates
using Zygote

# Water flow through a borehole: a standard 8-dimensional test function
function water_flow(x)
    r_w = x[1]   # radius of the borehole
    r = x[2]     # radius of influence
    T_u = x[3]   # transmissivity of the upper aquifer
    H_u = x[4]   # potentiometric head of the upper aquifer
    T_l = x[5]   # transmissivity of the lower aquifer
    H_l = x[6]   # potentiometric head of the lower aquifer
    L = x[7]     # length of the borehole
    K_w = x[8]   # hydraulic conductivity of the borehole
    log_val = log(r / r_w)
    return (2 * pi * T_u * (H_u - H_l)) /
           (log_val * (1 + (2 * L * T_u / (log_val * r_w^2 * K_w)) + T_u / T_l))
end

n = 1000
lb = [0.05, 100, 63070, 990, 63.1, 700, 1120, 9855]
ub = [0.15, 50000, 115600, 1110, 116, 820, 1680, 12045]
x = sample(n, lb, ub, HaltonSample())
grads = gradient.(water_flow, x)
y = water_flow.(x)
n_test = 100
x_test = sample(n_test, lb, ub, GoldenSample())
y_true = water_flow.(x_test)
n_comp = 2
delta_x = 0.0001
extra_points = 2
initial_theta = [0.01 for i in 1:n_comp]
g = GEKPLS(x, y, grads, n_comp, delta_x, lb, ub, extra_points, initial_theta)
y_pred = g.(x_test)
rmse = sqrt(sum(((y_pred - y_true) .^ 2) / n_test)) # root mean squared error
println(rmse) # 0.0347
```
Reviewer: This is just GEKPLS, why would it have a different page?

Author: Yeah, I added that in the Pages.jl for the modifications I made in the new file; I will remove it!
| **Sampling Method** | **RMSE** | **Characteristics** |
|---------------------|----------|---------------------|
| **Sobol Sampling** | 0.021472963465423097 | Uses digital nets to generate quasi-random, low-discrepancy points for improved coverage; requires careful handling, especially in higher dimensions. |
| **Halton Sampling** | 0.02144270998045834 | Uses a deterministic sequence based on prime numbers to generate quasi-random, low-discrepancy points; simpler to implement, but may exhibit correlations in some dimensions that affect coverage. |
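A minimal sketch of how such a comparison could be run, reusing the `water_flow` setup from the example above; exact RMSE values may differ slightly across package versions:

```julia
using Surrogates
using Zygote

# Borehole-style water flow function (same as in the example above)
function water_flow(x)
    r_w, r, T_u, H_u, T_l, H_l, L, K_w = x
    log_val = log(r / r_w)
    return (2 * pi * T_u * (H_u - H_l)) /
           (log_val * (1 + (2 * L * T_u / (log_val * r_w^2 * K_w)) + T_u / T_l))
end

lb = [0.05, 100, 63070, 990, 63.1, 700, 1120, 9855]
ub = [0.15, 50000, 115600, 1110, 116, 820, 1680, 12045]
n, n_test = 1000, 100
n_comp, delta_x, extra_points = 2, 0.0001, 2
initial_theta = [0.01 for i in 1:n_comp]

# Fixed test set shared by both runs
x_test = sample(n_test, lb, ub, GoldenSample())
y_true = water_flow.(x_test)

for sampler in (SobolSample(), HaltonSample())
    x = sample(n, lb, ub, sampler)
    y = water_flow.(x)
    grads = gradient.(water_flow, x)
    g = GEKPLS(x, y, grads, n_comp, delta_x, lb, ub, extra_points, initial_theta)
    y_pred = g.(x_test)
    rmse = sqrt(sum((y_pred - y_true) .^ 2) / n_test)
    println(sampler, " => RMSE = ", rmse)
end
```

Both RMSE values land in the same ballpark, which is consistent with the reviewer's point below that the choice of sampler is largely orthogonal to GEKPLS itself.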
Reviewer: This is more for QuasiMonteCarlo.jl.
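Since the reviewer notes that this comparison is really about the samplers rather than GEKPLS, a minimal sketch of drawing the same low-discrepancy sequences directly with QuasiMonteCarlo.jl might look as follows; this assumes QuasiMonteCarlo's exported `sample` function and sampler types, and the exact API can vary between versions:

```julia
using QuasiMonteCarlo

lb = [0.05, 100.0, 63070.0, 990.0, 63.1, 700.0, 1120.0, 9855.0]
ub = [0.15, 50000.0, 115600.0, 1110.0, 116.0, 820.0, 1680.0, 12045.0]

# Each call returns an 8 x n matrix, one point per column
sobol_pts = QuasiMonteCarlo.sample(5, lb, ub, SobolSample())
halton_pts = QuasiMonteCarlo.sample(5, lb, ub, HaltonSample())
```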
Changes to the tensor product tutorial:

@@ -1,6 +1,9 @@
# Tensor product function

The tensor product function is defined as:

``f(x) = \prod_{i=1}^d \cos(a\pi x_i)``

A tensor product function combines multiple functions or vectors using the tensor product operation, a mathematical operation that takes two vector spaces and produces another vector space, capturing their joint behavior across multiple dimensions.

For instance, consider a tensor product function defined as follows:
```math
f(x) = \prod_{i=1}^{d} \cos(a\pi x_i)
```
Reviewer: Is this different from what it was before?

Author: No, it is the same, just written in a more readable form than before, with the limits and the function written the way we usually write them!

Reviewer: Alright, then it's fine.
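For concreteness, a minimal sketch of evaluating this function in Julia; the value of `a` and the dimension here are arbitrary illustrative choices, not taken from the tutorial:

```julia
# Tensor product test function: f(x) = prod_i cos(a * pi * x_i)
a = 0.5
tensor_product(x) = prod(cos.(a .* pi .* x))

# Evaluate at a 3-dimensional point
println(tensor_product([0.1, 0.2, 0.3]))
```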
Let's import Surrogates and Plots:

```@example tensor

@@ -38,3 +41,16 @@
plot!(xs, f.(xs), label = "True function", legend = :top)
plot!(xs, loba_1.(xs), label = "Lobachevsky", legend = :top)
plot!(xs, krig.(xs), label = "Kriging", legend = :top)
```
## Kriging Plot

![kriging](https://github.com/Spinachboul/Surrogates.jl/assets/105979087/906e6688-db47-48be-90d1-ea471aacac16)
Reviewer: I don't think this is necessary. The plots would be generated from the above code block. Are you doing anything new, as in what code produces this plot?

Author: I didn't do anything new logically in the code; I just added some explanation about the tensor product function, which I thought would be relevant, and added the plots. I will remove them if they are not required!

Reviewer: Yes, remove them.

Author: @sathvikbhagavan

Reviewer: I don't understand the usefulness of the plots. They don't provide any new information either. Look at the current plot in the tutorial: https://docs.sciml.ai/Surrogates/stable/tensor_prod/

Author: Ok, so for now I just need to focus on adding mathematical explanations in the tutorials, right?

Reviewer: Sure. Feel free to open up a PR and continue working. Make it a draft PR and you can push in commits. When you are ready, mark it ready for review and ping me. I will take a look. Hopefully you are more comfortable with the ecosystem and how things work in SciML.

Author: Yeah! I am actually getting to know the SciML codebase better. Thanks for your guidance throughout this time.

Author: Hey @ChrisRackauckas and @sathvikbhagavan, I was wondering if this is how we could expand the docs in the tutorials section!

Author: And please also suggest models I could work on! I mean, there are many out there, but I'd like the ones which are important and really catch the reader's eye.
## Lobachevsky Plot

![lobachevsky](https://github.com/Spinachboul/Surrogates.jl/assets/105979087/678cfc13-0aec-4488-8e4d-39649853ecdd)

## Combined Plot

![combined_plot](https://github.com/Spinachboul/Surrogates.jl/assets/105979087/46762f0d-50c5-4d6c-961a-236fd9fb3ad5)
Reviewer: This is unrelated?

Reviewer: Oh I see, but the docs already generate these plots.

Author: Yeah, that's right! I attached the plots for reference about Kriging and Lobachevsky individually and their plotted curves. And can't we attach plots in both the readme and the docs?

Reviewer: No, I don't recommend attaching plots; instead, run code to generate them.
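In line with the reviewer's suggestion to generate the figures from code rather than attach images, a plausible sketch of producing these curves is shown below; the surrogate constructors and keyword values follow the Surrogates.jl tutorials, but they are assumptions here rather than part of this PR:

```julia
using Surrogates, Plots

# 1-D instance of the tensor product function
a = 0.5
f = x -> cos(a * pi * x)

lb, ub = -5.0, 5.0
x = sample(100, lb, ub, SobolSample())
y = f.(x)

# Build the two surrogates compared in the tutorial
loba_1 = LobachevskySurrogate(x, y, lb, ub, alpha = 2.0, n = 6)
krig = Kriging(x, y, lb, ub)

xs = lb:0.01:ub
plot(x, y, seriestype = :scatter, label = "Sampled points", legend = :top)
plot!(xs, f.(xs), label = "True function", legend = :top)
plot!(xs, loba_1.(xs), label = "Lobachevsky", legend = :top)  # "Lobachevsky Plot"
plot!(xs, krig.(xs), label = "Kriging", legend = :top)        # "Kriging Plot"
```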
Reviewer: Like we talked about in #456, GEKPLS is independent of the sampling method, and the notion of improved accuracy depends on the original problem itself and on which sampling works better for it. There is no general recipe for making it better. So I don't think this tutorial adds any new value, and it should be removed.
Author: @sathvikbhagavan You're right! If anything, we could use better optimization techniques to improve gekpls.jl, instead of focusing solely on sampling methods! So, I have removed the file.
Reviewer: I don't think you have removed it. You might have forgotten to push it. Look at the diff and commits to confirm.
Reviewer: Delete `Improvedgekpls.md` and then we can merge this PR.
Author: Yeah, I have deleted the file `Improvedgekpls.md`. Please check once more whether all the changes are suitable for merging.
Author: @ChrisRackauckas and @sathvikbhagavan Is this PR ready for merging, or are there still things that need to be modified?