
Commit

Merge branch 'SciML:master' into research
AstitvaAggarwal authored Apr 29, 2024
2 parents fad53ff + a1b3125 commit 3bb93dd
Showing 7 changed files with 9 additions and 9 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/SpellCheck.yml
@@ -10,4 +10,4 @@ jobs:
- name: Checkout Actions Repository
uses: actions/checkout@v4
- name: Check spelling
- uses: crate-ci/[email protected]
+ uses: crate-ci/[email protected]
2 changes: 1 addition & 1 deletion docs/src/examples/complex.md
@@ -44,7 +44,7 @@ alg = NNODE(chain, opt, ps; strategy = StochasticTraining(500))
sol = solve(problem, alg, verbose = false, maxiters = 5000, saveat = 0.01)
```

- Now, lets plot the predictions.
+ Now, let's plot the predictions.

`u1`:

2 changes: 1 addition & 1 deletion docs/src/tutorials/Lotka_Volterra_BPINNs.md
@@ -66,7 +66,7 @@ plot!(time, y, label = "noisy y")
plot!(solution, labels = ["x" "y"])
```

- Lets define a PINN.
+ Let's define a PINN.

```@example bpinn
# Neural Networks must have 2 outputs as u -> [dx,dy] in function lotka_volterra()
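For context on the comment in this hunk: the network's final layer must have two outputs. A minimal sketch of such a chain, assuming Lux.jl (the layer widths here are illustrative, not necessarily the tutorial's exact architecture):

```julia
using Lux

# A chain whose last Dense layer has 2 outputs, matching u -> [dx, dy]
# in lotka_volterra(); depth and widths are illustrative choices.
chain = Chain(Dense(1 => 6, tanh), Dense(6 => 6, tanh), Dense(6 => 2))
```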
2 changes: 1 addition & 1 deletion docs/src/tutorials/dae.md
@@ -37,7 +37,7 @@ alg = NNDAE(chain, opt; autodiff = false)
sol = solve(prob, alg, verbose = true, dt = 1 / 100.0, maxiters = 3000, abstol = 1e-10)
```

- Now lets compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the DAE.
+ Now let's compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the DAE.

```@example dae
function example1(du, u, p, t)
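A hedged sketch of that comparison, assuming `prob` is the `DAEProblem` and `sol` the NNDAE solution from earlier in this tutorial, with a classical fully implicit DAE integrator providing the reference:

```julia
using OrdinaryDiffEq, Plots

# Reference solution from a classical DAE solver, sampled at the
# PINN's time points (DFBDF handles fully implicit DAEProblems).
ground_truth = solve(prob, DFBDF(), saveat = sol.t)

plot(ground_truth, labels = ["u1 truth" "u2 truth"])
plot!(sol, labels = ["u1 PINN" "u2 PINN"])
```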
2 changes: 1 addition & 1 deletion docs/src/tutorials/ode.md
@@ -61,7 +61,7 @@ Once these pieces are together, we call `solve` just like with any other `ODEPro
sol = solve(prob, alg, verbose = true, maxiters = 2000, saveat = 0.01)
```

- Now lets compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the ODE.
+ Now let's compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the ODE.

```@example nnode1
using OrdinaryDiffEq, Plots
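A minimal sketch of that comparison, assuming `prob` and `sol` are the `ODEProblem` and NNODE solution defined earlier in this tutorial:

```julia
using OrdinaryDiffEq, Plots

# Reference solution from a standard adaptive Runge-Kutta method,
# saved on the same grid as the PINN prediction.
ground_truth = solve(prob, Tsit5(), saveat = 0.01)

plot(ground_truth, label = "ground truth")
plot!(sol, label = "PINN prediction")
```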
6 changes: 3 additions & 3 deletions docs/src/tutorials/ode_parameter_estimation.md
@@ -26,7 +26,7 @@ u0 = [5.0, 5.0]
prob = ODEProblem(lv, u0, tspan, [1.0, 1.0, 1.0, 1.0])
```

- As we want to estimate the parameters as well, lets get some data.
+ As we want to estimate the parameters as well, let's get some data.

```@example param_estim_lv
true_p = [1.5, 1.0, 3.0, 1.0]
@@ -36,7 +36,7 @@ t_ = sol_data.t
u_ = reduce(hcat, sol_data.u)
```

- Now, lets define a neural network for the PINN using [Lux.jl](https://lux.csail.mit.edu/).
+ Now, let's define a neural network for the PINN using [Lux.jl](https://lux.csail.mit.edu/).

```@example param_estim_lv
rng = Random.default_rng()
@@ -81,7 +81,7 @@ plot(sol, labels = ["u1_pinn" "u2_pinn"])
plot!(sol_data, labels = ["u1_data" "u2_data"])
```

- We can see it is a good fit! Now lets see if we have the parameters of the equation also estimated correctly or not.
+ We can see it is a good fit! Now let's see if we have the parameters of the equation also estimated correctly or not.

```@example param_estim_lv
sol.k.u.p
2 changes: 1 addition & 1 deletion src/training_strategies.jl
@@ -211,7 +211,7 @@ that accelerate the convergence in high dimensional spaces over pure random sequ
* `sampling_alg`: the quasi-Monte Carlo sampling algorithm,
* `resampling`: if it's false - the full training set is generated in advance before training,
and at each iteration, one subset is randomly selected out of the batch.
- Ff it's true - the training set isn't generated beforehand, and one set of quasi-random
+ If it's true - the training set isn't generated beforehand, and one set of quasi-random
points is generated directly at each iteration in runtime. In this case, `minibatch` has no effect,
* `minibatch`: the number of subsets, if resampling == false.
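To make the `resampling` semantics in this docstring concrete, a hedged sketch assuming NeuralPDE.jl's `QuasiRandomTraining` constructor with these keyword names, and samplers from QuasiMonteCarlo.jl:

```julia
using NeuralPDE, QuasiMonteCarlo

# resampling = false: `minibatch` subsets of quasi-random points are
# generated before training, and one subset is drawn at random per iteration.
strategy_pool = QuasiRandomTraining(256; sampling_alg = LatinHypercubeSample(),
    resampling = false, minibatch = 32)

# resampling = true: a fresh set of quasi-random points is generated at each
# iteration, and `minibatch` has no effect.
strategy_fresh = QuasiRandomTraining(256; sampling_alg = SobolSample(),
    resampling = true)
```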
