Commit: docs: update tutorials

sathvikbhagavan committed Mar 7, 2024
1 parent 29882a2 · commit 9ba9932
Showing 10 changed files with 112 additions and 179 deletions.
14 changes: 5 additions & 9 deletions docs/src/tutorials/dae.md
@@ -15,14 +15,14 @@ Let's solve a simple DAE system:

```@example dae
using NeuralPDE
-using Random, Flux
+using Random
using OrdinaryDiffEq, Statistics
import Lux, OptimizationOptimisers
example = (du, u, p, t) -> [cos(2pi * t) - du[1], u[2] + cos(2pi * t) - du[2]]
u₀ = [1.0, -1.0]
du₀ = [0.0, 0.0]
-tspan = (0.0f0, 1.0f0)
+tspan = (0.0, 1.0)
```
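
Written out, the residual system encoded by `example` is

```math
0 = \cos(2\pi t) - u_1'(t), \qquad 0 = u_2(t) + \cos(2\pi t) - u_2'(t)
```

with the initial conditions `u₀` and `du₀` given above.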

`differential_vars` is a logical array which declares which variables are the differential (non-algebraic) variables:
@@ -33,13 +33,10 @@ differential_vars = [true, false]

```@example dae
prob = DAEProblem(example, du₀, u₀, tspan; differential_vars = differential_vars)
-chain = Flux.Chain(Dense(1, 15, cos), Dense(15, 15, sin), Dense(15, 2))
+chain = Lux.Chain(Lux.Dense(1, 15, cos), Lux.Dense(15, 15, sin), Lux.Dense(15, 2))
opt = OptimizationOptimisers.Adam(0.1)
alg = NNDAE(chain, opt; autodiff = false)
-sol = solve(prob,
-    alg, verbose = false, dt = 1 / 100.0f0,
-    maxiters = 3000, abstol = 1.0f-10)
+sol = solve(prob, alg, verbose = true, dt = 1 / 100.0, maxiters = 3000, abstol = 1e-10)
```
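
The returned `sol` behaves like a standard solution object. As a quick illustrative check (assuming the `sol` from the call above):

```julia
# Final time point and the network's prediction there.
sol.t[end], sol.u[end]
```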

Now let's compare the predictions from the learned network with the ground truth, which we can obtain by numerically solving the DAE.
@@ -50,8 +47,7 @@ function example1(du, u, p, t)
du[2] = u[2] + cos(2pi * t)
nothing
end
-M = [1.0 0
-     0 0]
+M = [1.0 0.0; 0.0 0.0]
f = ODEFunction(example1, mass_matrix = M)
prob_mm = ODEProblem(f, u₀, tspan)
ground_sol = solve(prob_mm, Rodas5(), reltol = 1e-8, abstol = 1e-8)
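
A minimal comparison sketch (illustrative only, assuming the `sol` and `ground_sol` objects defined above):

```julia
using Plots
# Overlay the PINN prediction on the stiff-solver ground truth.
plot(ground_sol, labels = ["u1 (Rodas5)" "u2 (Rodas5)"])
plot!(sol, labels = ["u1 (NNDAE)" "u2 (NNDAE)"])
```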
27 changes: 6 additions & 21 deletions docs/src/tutorials/derivative_neural_network.md
@@ -53,9 +53,9 @@ using the second numeric derivative `Dt(Dtu1(t,x))`.

```@example derivativenn
using NeuralPDE, Lux, ModelingToolkit
-using Optimization, OptimizationOptimisers, OptimizationOptimJL
+using Optimization, OptimizationOptimisers, OptimizationOptimJL, LineSearches
using Plots
-import ModelingToolkit: Interval, infimum, supremum
+using ModelingToolkit: Interval, infimum, supremum
@parameters t, x
Dt = Differential(t)
@@ -71,8 +71,6 @@ bcs_ = [u1(0.0, x) ~ sin(pi * x),
u2(0.0, x) ~ cos(pi * x),
Dt(u1(0, x)) ~ -sin(pi * x),
Dt(u2(0, x)) ~ -cos(pi * x),
-#Dtu1(0,x) ~ -sin(pi*x),
-# Dtu2(0,x) ~ -cos(pi*x),
u1(t, 0.0) ~ 0.0,
u2(t, 0.0) ~ exp(-t),
u1(t, 1.0) ~ 0.0,
@@ -93,9 +91,8 @@ input_ = length(domains)
n = 15
chain = [Lux.Chain(Dense(input_, n, Lux.σ), Dense(n, n, Lux.σ), Dense(n, 1)) for _ in 1:7]
-training_strategy = NeuralPDE.QuadratureTraining()
-discretization = NeuralPDE.PhysicsInformedNN(chain,
-    training_strategy)
+training_strategy = NeuralPDE.QuadratureTraining(; batch = 200, reltol = 1e-6, abstol = 1e-6)
+discretization = NeuralPDE.PhysicsInformedNN(chain, training_strategy)
vars = [u1(t, x), u2(t, x), u3(t, x), Dxu1(t, x), Dtu1(t, x), Dxu2(t, x), Dtu2(t, x)]
@named pdesystem = PDESystem(eqs_, bcs__, domains, [t, x], vars)
@@ -116,7 +113,7 @@ end
res = Optimization.solve(prob, OptimizationOptimisers.Adam(0.01); callback = callback, maxiters = 2000)
prob = remake(prob, u0 = res.u)
-res = Optimization.solve(prob, BFGS(); callback = callback, maxiters = 10000)
+res = Optimization.solve(prob, LBFGS(linesearch = BackTracking()); callback = callback, maxiters = 1000)
phi = discretization.phi
```
@@ -136,6 +133,7 @@ Dxu1_real(t, x) = pi * exp(-t) * cos(pi * x)
Dtu1_real(t, x) = -exp(-t) * sin(pi * x)
Dxu2_real(t, x) = -pi * exp(-t) * sin(pi * x)
Dtu2_real(t, x) = -exp(-t) * cos(pi * x)
function analytic_sol_func_all(t, x)
[u1_real(t, x), u2_real(t, x), u3_real(t, x),
Dxu1_real(t, x), Dtu1_real(t, x), Dxu2_real(t, x), Dtu2_real(t, x)]
@@ -151,18 +149,5 @@ for i in 1:7
p2 = plot(ts, xs, u_predict[i], linetype = :contourf, title = "predict")
p3 = plot(ts, xs, diff_u[i], linetype = :contourf, title = "error")
plot(p1, p2, p3)
savefig("3sol_ub$i")
end
```

-![aprNN_sol_u1](https://user-images.githubusercontent.com/12683885/122998551-de79d600-d3b5-11eb-8f5d-59d00178c2ab.png)
-
-![aprNN_sol_u2](https://user-images.githubusercontent.com/12683885/122998567-e3d72080-d3b5-11eb-9024-4072f4b66cda.png)
-
-![aprNN_sol_u3](https://user-images.githubusercontent.com/12683885/122998578-e6d21100-d3b5-11eb-96a5-f64e5593b35e.png)
-
-## Comparison of the second numerical derivative and numerical + neural network derivative
-
-![DDu1](https://user-images.githubusercontent.com/12683885/123113394-3280cb00-d447-11eb-88e3-a8541bbf089f.png)
-
-![DDu2](https://user-images.githubusercontent.com/12683885/123113413-36ace880-d447-11eb-8f6a-4c3caa86e359.png)
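
A compact error summary can be appended after the plotting loop above; a sketch assuming the `diff_u` array it computes:

```julia
# Largest absolute error for each of the seven learned functions.
max_errors = [maximum(abs, diff_u[i]) for i in 1:7]
```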
39 changes: 18 additions & 21 deletions docs/src/tutorials/gpu.md
@@ -29,24 +29,28 @@ we must ensure that our initial parameters for the neural network are on the GPU
is done, then the internal computations will all take place on the GPU. This is done by
applying the device function returned by `gpu_device()` to the initial parameters, like:

-```julia
-using Lux, ComponentArrays
+```@example gpu
+using Lux, LuxCUDA, ComponentArrays
+const gpud = gpu_device()
inner = 25
chain = Chain(Dense(3, inner, Lux.σ),
Dense(inner, inner, Lux.σ),
Dense(inner, inner, Lux.σ),
Dense(inner, inner, Lux.σ),
Dense(inner, 1))
ps = Lux.setup(Random.default_rng(), chain)[1]
-ps = ps |> ComponentArray |> gpu .|> Float64
+ps = ps |> ComponentArray |> gpud .|> Float64
```
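
As a sanity check (illustrative, and assuming a working CUDA device), the flattened parameters should now be backed by a `CuArray`:

```julia
summary(ps)  # expect a ComponentVector wrapping a CuArray of Float64
```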

In total, this looks like:

-```julia
-using NeuralPDE, Lux, CUDA, Random, ComponentArrays
+```@example gpu
+using NeuralPDE, Lux, LuxCUDA, Random, ComponentArrays
using Optimization
using OptimizationOptimisers
import ModelingToolkit: Interval
+using Plots
+using Printf
@parameters t x y
@variables u(..)
@@ -84,9 +88,9 @@ chain = Chain(Dense(3, inner, Lux.σ),
Dense(inner, inner, Lux.σ),
Dense(inner, 1))
-strategy = QuadratureTraining()
+strategy = QuasiRandomTraining(100)
ps = Lux.setup(Random.default_rng(), chain)[1]
-ps = ps |> ComponentArray |> gpu .|> Float64
+ps = ps |> ComponentArray |> gpud .|> Float64
discretization = PhysicsInformedNN(chain,
strategy,
init_params = ps)
@@ -100,35 +104,31 @@ callback = function (p, l)
return false
end
-res = Optimization.solve(prob, Adam(0.01); callback = callback, maxiters = 2500)
+res = Optimization.solve(prob, OptimizationOptimisers.Adam(1e-2); callback = callback, maxiters = 2500)
```

-We then use the `remake` function to rebuild the PDE problem to start a new
-optimization at the optimized parameters, and continue with a lower learning rate:
+We then use the `remake` function to rebuild the PDE problem to start a new optimization at the optimized parameters, and continue with a lower learning rate:

-```julia
+```@example gpu
prob = remake(prob, u0 = res.u)
-res = Optimization.solve(prob, Adam(0.001); callback = callback, maxiters = 2500)
+res = Optimization.solve(prob, OptimizationOptimisers.Adam(1e-3); callback = callback, maxiters = 2500)
```
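
The same pattern can be repeated with progressively smaller learning rates; a hypothetical third stage would look like:

```julia
# Illustrative only: one more refinement stage at a smaller step size.
prob = remake(prob, u0 = res.u)
res = Optimization.solve(prob, OptimizationOptimisers.Adam(1e-4); callback = callback, maxiters = 1000)
```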

Finally, we inspect the solution:

-```julia
+```@example gpu
phi = discretization.phi
ts, xs, ys = [infimum(d.domain):0.1:supremum(d.domain) for d in domains]
u_real = [analytic_sol_func(t, x, y) for t in ts for x in xs for y in ys]
-u_predict = [first(Array(phi(gpu([t, x, y]), res.u))) for t in ts for x in xs for y in ys]
-
-using Plots
-using Printf
+u_predict = [first(Array(phi([t, x, y], res.u))) for t in ts for x in xs for y in ys]
function plot_(res)
# Animate
anim = @animate for (i, t) in enumerate(0:0.05:t_max)
@info "Animating frame $i..."
u_real = reshape([analytic_sol_func(t, x, y) for x in xs for y in ys],
(length(xs), length(ys)))
-    u_predict = reshape([Array(phi(gpu([t, x, y]), res.u))[1] for x in xs for y in ys],
+    u_predict = reshape([Array(phi([t, x, y], res.u))[1] for x in xs for y in ys],
length(xs), length(ys))
u_error = abs.(u_predict .- u_real)
title = @sprintf("predict, t = %.3f", t)
@@ -145,8 +145,6 @@ end
plot_(res)
```

-![3pde](https://user-images.githubusercontent.com/12683885/129949743-9471d230-c14f-4105-945f-6bc52677d40e.gif)

## Performance benchmarks

Here are some performance benchmarks for a 2D PDE with various numbers of input points and the
@@ -155,7 +153,6 @@ runtime with GPU and CPU.

```julia
julia> CUDA.device()

```

![image](https://user-images.githubusercontent.com/12683885/110297207-49202500-8004-11eb-9e45-d4cb28045d87.png)
10 changes: 4 additions & 6 deletions docs/src/tutorials/integro_diff.md
@@ -45,8 +45,9 @@ u(0) = 0
```

```@example integro
-using NeuralPDE, Flux, ModelingToolkit, Optimization, OptimizationOptimJL, DomainSets
-import ModelingToolkit: Interval, infimum, supremum
+using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL, DomainSets
+using ModelingToolkit: Interval, infimum, supremum
+using Plots
@parameters t
@variables i(..)
@@ -55,7 +56,7 @@ Ii = Integral(t in DomainSets.ClosedInterval(0, t))
eq = Di(i(t)) + 2 * i(t) + 5 * Ii(i(t)) ~ 1
bcs = [i(0.0) ~ 0.0]
domains = [t ∈ Interval(0.0, 2.0)]
-chain = Chain(Dense(1, 15, Flux.σ), Dense(15, 1)) |> f64
+chain = Lux.Chain(Lux.Dense(1, 15, Lux.σ), Lux.Dense(15, 1))
strategy_ = QuadratureTraining()
discretization = PhysicsInformedNN(chain,
@@ -78,9 +79,6 @@ u_predict = [first(phi([t], res.u)) for t in ts]
analytic_sol_func(t) = 1 / 2 * (exp(-t)) * (sin(2 * t))
u_real = [analytic_sol_func(t) for t in ts]
-using Plots
plot(ts, u_real, label = "Analytical Solution")
plot!(ts, u_predict, label = "PINN Solution")
```

-![IDE](https://user-images.githubusercontent.com/12683885/129749371-18b44bbc-18c8-49c5-bf30-0cd97ecdd977.png)
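
A quick quantitative check (illustrative, assuming the `ts`, `u_predict`, and `u_real` arrays from above):

```julia
# Maximum pointwise error of the PINN against the analytic solution.
maximum(abs.(u_predict .- u_real))
```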
6 changes: 1 addition & 5 deletions docs/src/tutorials/low_level.md
@@ -14,7 +14,7 @@ with Physics-Informed Neural Networks. Here is an example of using the low-level

```@example low_level
using NeuralPDE, Lux, ModelingToolkit, Optimization, OptimizationOptimJL
-import ModelingToolkit: Interval, infimum, supremum
+using ModelingToolkit: Interval, infimum, supremum
@parameters t, x
@variables u(..)
@@ -87,7 +87,3 @@ p2 = plot(xs, u_predict[11], title = "t = 0.5");
p3 = plot(xs, u_predict[end], title = "t = 1");
plot(p1, p2, p3)
```

-![burgers](https://user-images.githubusercontent.com/12683885/90984874-a0870800-e580-11ea-9fd4-af8a4e3c523e.png)
-
-![burgers2](https://user-images.githubusercontent.com/12683885/90984856-8c430b00-e580-11ea-9206-1a88ebd24ca0.png)
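
For a time-resolved view, an animation sketch (illustrative only, assuming the `ts`, `xs`, and `u_predict` arrays from the tutorial):

```julia
using Plots
# Animate the learned Burgers solution through time.
anim = @animate for (i, t) in enumerate(ts)
    plot(xs, u_predict[i], ylims = (-1.5, 1.5), title = "t = $(round(t, digits = 2))", legend = false)
end
gif(anim, "burgers.gif", fps = 10)
```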