docs: update and cleanup tutorials and examples #815

Merged: 14 commits, merged Mar 14, 2024
Changes from 1 commit
docs: update callback and qualify Adam from OptimizationOptimisers
sathvikbhagavan committed Mar 13, 2024
commit aa203ba277567738224ef6591efcde3102b10726
4 changes: 1 addition & 3 deletions docs/src/examples/3rd.md
@@ -47,7 +47,7 @@ callback = function (p, l)
return false
end

-res = Optimization.solve(prob, Adam(0.01); callback = callback, maxiters = 2000)
+res = Optimization.solve(prob, OptimizationOptimisers.Adam(0.01); callback = callback, maxiters = 2000)
phi = discretization.phi
```

@@ -67,5 +67,3 @@ x_plot = collect(xs)
plot(x_plot, u_real, title = "real")
plot!(x_plot, u_predict, title = "predict")
```

-![hodeplot](https://user-images.githubusercontent.com/12683885/90276340-69bc3e00-de6c-11ea-89a7-7d291123a38b.png)
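The functional change in this file is the module-qualified optimizer. A hedged sketch of why the qualification matters (the `prob` and `callback` objects are assumed to be set up as in the tutorial):

```julia
# Several packages in this ecosystem export an `Adam` type (for example
# Optimisers.jl and the Optim.jl wrappers), so an unqualified `Adam(0.01)`
# can be ambiguous or resolve to the wrong optimizer once more than one of
# them is loaded. Qualifying with the module makes the choice explicit:
import OptimizationOptimisers

opt = OptimizationOptimisers.Adam(0.01)  # unambiguous Optimisers-backed Adam
# res = Optimization.solve(prob, opt; callback = callback, maxiters = 2000)
```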
4 changes: 2 additions & 2 deletions docs/src/examples/linear_parabolic.md
@@ -85,8 +85,8 @@ global iteration = 0
callback = function (p, l)
if iteration % 10 == 0
println("loss: ", l)
-println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
-println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
+println("pde_losses: ", map(l_ -> l_(p.u), pde_inner_loss_functions))
+println("bcs_losses: ", map(l_ -> l_(p.u), bcs_inner_loss_functions))
end
global iteration += 1
return false
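The `p` to `p.u` change recurs throughout this PR. A hedged sketch of the reasoning (assuming a recent Optimization.jl, where the callback's first argument is the solver state rather than the raw parameter vector):

```julia
# In newer Optimization.jl versions the callback receives a state object
# whose `u` field holds the current parameter values. The inner loss
# functions expect the parameter vector itself, so they must be called
# with `p.u` rather than `p`.
callback = function (p, l)
    println("loss: ", l)
    println("pde_losses: ", map(l_ -> l_(p.u), pde_inner_loss_functions))
    println("bcs_losses: ", map(l_ -> l_(p.u), bcs_inner_loss_functions))
    return false
end
```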
6 changes: 3 additions & 3 deletions docs/src/examples/nonlinear_elliptic.md
@@ -99,9 +99,9 @@ global iteration = 0
callback = function (p, l)
if iteration % 10 == 0
println("loss: ", l)
-println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
-println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
-println("der_losses: ", map(l_ -> l_(p), aprox_derivative_loss_functions))
+println("pde_losses: ", map(l_ -> l_(p.u), pde_inner_loss_functions))
+println("bcs_losses: ", map(l_ -> l_(p.u), bcs_inner_loss_functions))
+println("der_losses: ", map(l_ -> l_(p.u), aprox_derivative_loss_functions))
end
global iteration += 1
return false
4 changes: 2 additions & 2 deletions docs/src/examples/nonlinear_hyperbolic.md
@@ -94,8 +94,8 @@ bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions

callback = function (p, l)
println("loss: ", l)
-println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
-println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
+println("pde_losses: ", map(l_ -> l_(p.u), pde_inner_loss_functions))
+println("bcs_losses: ", map(l_ -> l_(p.u), bcs_inner_loss_functions))
return false
end

6 changes: 2 additions & 4 deletions docs/src/examples/wave.md
@@ -81,8 +81,6 @@ p3 = plot(ts, xs, diff_u, linetype = :contourf, title = "error");
plot(p1, p2, p3)
```

-![waveplot](https://user-images.githubusercontent.com/12683885/101984293-74a7a380-3c91-11eb-8e78-72a50d88e3f8.png)

## 1D Damped Wave Equation with Dirichlet boundary conditions

Now let's solve the 1-dimensional wave equation with damping.
@@ -159,8 +157,8 @@ bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions

callback = function (p, l)
println("loss: ", l)
-println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
-println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
+println("pde_losses: ", map(l_ -> l_(p.u), pde_inner_loss_functions))
+println("bcs_losses: ", map(l_ -> l_(p.u), bcs_inner_loss_functions))
return false
end

6 changes: 3 additions & 3 deletions docs/src/tutorials/constraints.md
@@ -78,9 +78,9 @@ aprox_derivative_loss_functions = sym_prob.loss_functions.bc_loss_functions

cb_ = function (p, l)
println("loss: ", l)
-println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
-println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
-println("additional_loss: ", norm_loss_function(phi, p, nothing))
+println("pde_losses: ", map(l_ -> l_(p.u), pde_inner_loss_functions))
+println("bcs_losses: ", map(l_ -> l_(p.u), bcs_inner_loss_functions))
+println("additional_loss: ", norm_loss_function(phi, p.u, nothing))
return false
end
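The same state-object convention applies to the additional loss here: `norm_loss_function` is called with the parameter values directly, so it too must receive `p.u`. A hedged sketch (with `phi` and `norm_loss_function` assumed from the tutorial):

```julia
cb_ = function (p, l)
    println("loss: ", l)
    # Both the symbolic inner losses and the hand-written additional loss
    # operate on the parameter vector, which now lives in p.u.
    println("additional_loss: ", norm_loss_function(phi, p.u, nothing))
    return false
end
```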

4 changes: 2 additions & 2 deletions docs/src/tutorials/dae.md
@@ -16,8 +16,8 @@ Let's solve a simple DAE system:
```@example dae
using NeuralPDE
using Random, Flux
-using OrdinaryDiffEq, Optimisers, Statistics
-import Lux, OptimizationOptimisers, OptimizationOptimJL
+using OrdinaryDiffEq, Statistics
+import Lux, OptimizationOptimisers

example = (du, u, p, t) -> [cos(2pi * t) - du[1], u[2] + cos(2pi * t) - du[2]]
u₀ = [1.0, -1.0]
8 changes: 4 additions & 4 deletions docs/src/tutorials/derivative_neural_network.md
@@ -108,13 +108,13 @@ aprox_derivative_loss_functions = sym_prob.loss_functions.bc_loss_functions[9:en

callback = function (p, l)
println("loss: ", l)
-println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
-println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
-println("der_losses: ", map(l_ -> l_(p), aprox_derivative_loss_functions))
+println("pde_losses: ", map(l_ -> l_(p.u), pde_inner_loss_functions))
+println("bcs_losses: ", map(l_ -> l_(p.u), bcs_inner_loss_functions))
+println("der_losses: ", map(l_ -> l_(p.u), aprox_derivative_loss_functions))
return false
end

-res = Optimization.solve(prob, Adam(0.01); callback = callback, maxiters = 2000)
+res = Optimization.solve(prob, OptimizationOptimisers.Adam(0.01); callback = callback, maxiters = 2000)
prob = remake(prob, u0 = res.u)
res = Optimization.solve(prob, BFGS(); callback = callback, maxiters = 10000)
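As a usage note, this tutorial's training loop follows a common two-stage pattern, sketched here with `prob` and `callback` assumed from the surrounding example:

```julia
# Stage 1: a cheap first-order optimizer (Adam) to get near a minimum.
res = Optimization.solve(prob, OptimizationOptimisers.Adam(0.01); callback = callback, maxiters = 2000)
# Warm-start the second stage from the first-stage result.
prob = remake(prob, u0 = res.u)
# Stage 2: a quasi-Newton method (BFGS) to refine the solution.
res = Optimization.solve(prob, BFGS(); callback = callback, maxiters = 10000)
```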

4 changes: 2 additions & 2 deletions docs/src/tutorials/low_level.md
@@ -53,8 +53,8 @@ bc_loss_functions = sym_prob.loss_functions.bc_loss_functions

callback = function (p, l)
println("loss: ", l)
-println("pde_losses: ", map(l_ -> l_(p), pde_loss_functions))
-println("bcs_losses: ", map(l_ -> l_(p), bc_loss_functions))
+println("pde_losses: ", map(l_ -> l_(p.u), pde_loss_functions))
+println("bcs_losses: ", map(l_ -> l_(p.u), bc_loss_functions))
return false
end

14 changes: 4 additions & 10 deletions docs/src/tutorials/systems.md
@@ -79,8 +79,8 @@ bcs_inner_loss_functions = sym_prob.loss_functions.bc_loss_functions

callback = function (p, l)
println("loss: ", l)
-println("pde_losses: ", map(l_ -> l_(p), pde_inner_loss_functions))
-println("bcs_losses: ", map(l_ -> l_(p), bcs_inner_loss_functions))
+println("pde_losses: ", map(l_ -> l_(p.u), pde_inner_loss_functions))
+println("bcs_losses: ", map(l_ -> l_(p.u), bcs_inner_loss_functions))
return false
end

@@ -138,8 +138,8 @@ bc_loss_functions = sym_prob.loss_functions.bc_loss_functions

callback = function (p, l)
println("loss: ", l)
-println("pde_losses: ", map(l_ -> l_(p), pde_loss_functions))
-println("bcs_losses: ", map(l_ -> l_(p), bc_loss_functions))
+println("pde_losses: ", map(l_ -> l_(p.u), pde_loss_functions))
+println("bcs_losses: ", map(l_ -> l_(p.u), bc_loss_functions))
return false
end

@@ -183,12 +183,6 @@ for i in 1:3
end
```

-![sol_uq1](https://user-images.githubusercontent.com/12683885/122979254-03634e80-d3a0-11eb-985b-d3bae2dddfde.png)

-![sol_uq2](https://user-images.githubusercontent.com/12683885/122979278-09592f80-d3a0-11eb-8fee-de3652f138d8.png)

-![sol_uq3](https://user-images.githubusercontent.com/12683885/122979288-0e1de380-d3a0-11eb-9005-bfb501959b83.png)

Notice here that the solution is represented in the `OptimizationSolution` with `u` as
the parameters for the trained neural network. For the case where the neural network
is from Lux.jl, it is given as a `ComponentArray`, where `res.u.depvar.x` corresponds to the result