Merge pull request #849 from lanceXwq/master-lanceXwq
Docstring typo fixes
sathvikbhagavan authored Apr 15, 2024
2 parents 0a83586 + f00dc56 · commit 110addc
Showing 6 changed files with 8 additions and 8 deletions.
2 changes: 1 addition & 1 deletion docs/src/examples/complex.md
@@ -44,7 +44,7 @@ alg = NNODE(chain, opt, ps; strategy = StochasticTraining(500))
sol = solve(problem, alg, verbose = false, maxiters = 5000, saveat = 0.01)
```

-Now, lets plot the predictions.
+Now, let's plot the predictions.

`u1`:

2 changes: 1 addition & 1 deletion docs/src/tutorials/Lotka_Volterra_BPINNs.md
@@ -66,7 +66,7 @@ plot!(time, y, label = "noisy y")
plot!(solution, labels = ["x" "y"])
```

-Lets define a PINN.
+Let's define a PINN.

```@example bpinn
# Neural Networks must have 2 outputs as u -> [dx,dy] in function lotka_volterra()
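# (Not part of this commit — the tutorial's actual network definition is
# collapsed in this diff. As a hypothetical illustration only, a Lux chain
# with the required two outputs could look like:
#     chain = Lux.Chain(Lux.Dense(1 => 16, tanh), Lux.Dense(16 => 2))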
2 changes: 1 addition & 1 deletion docs/src/tutorials/dae.md
@@ -37,7 +37,7 @@ alg = NNDAE(chain, opt; autodiff = false)
sol = solve(prob, alg, verbose = true, dt = 1 / 100.0, maxiters = 3000, abstol = 1e-10)
```

-Now lets compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the DAE.
+Now let's compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the DAE.

```@example dae
function example1(du, u, p, t)
2 changes: 1 addition & 1 deletion docs/src/tutorials/ode.md
@@ -61,7 +61,7 @@ Once these pieces are together, we call `solve` just like with any other `ODEPro
sol = solve(prob, alg, verbose = true, maxiters = 2000, saveat = 0.01)
```

-Now lets compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the ODE.
+Now let's compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the ODE.

```@example nnode1
using OrdinaryDiffEq, Plots
6 changes: 3 additions & 3 deletions docs/src/tutorials/ode_parameter_estimation.md
@@ -26,7 +26,7 @@ u0 = [5.0, 5.0]
prob = ODEProblem(lv, u0, tspan, [1.0, 1.0, 1.0, 1.0])
```

-As we want to estimate the parameters as well, lets get some data.
+As we want to estimate the parameters as well, let's get some data.

```@example param_estim_lv
true_p = [1.5, 1.0, 3.0, 1.0]
@@ -36,7 +36,7 @@ t_ = sol_data.t
u_ = reduce(hcat, sol_data.u)
```

-Now, lets define a neural network for the PINN using [Lux.jl](https://lux.csail.mit.edu/).
+Now, let's define a neural network for the PINN using [Lux.jl](https://lux.csail.mit.edu/).

```@example param_estim_lv
rng = Random.default_rng()
@@ -81,7 +81,7 @@ plot(sol, labels = ["u1_pinn" "u2_pinn"])
plot!(sol_data, labels = ["u1_data" "u2_data"])
```

-We can see it is a good fit! Now lets see if we have the parameters of the equation also estimated correctly or not.
+We can see it is a good fit! Now let's see if we have the parameters of the equation also estimated correctly or not.

```@example param_estim_lv
sol.k.u.p
2 changes: 1 addition & 1 deletion src/training_strategies.jl
@@ -160,7 +160,7 @@ that accelerate the convergence in high dimensional spaces over pure random sequ
* `sampling_alg`: the quasi-Monte Carlo sampling algorithm,
* `resampling`: if it's false - the full training set is generated in advance before training,
and at each iteration, one subset is randomly selected out of the batch.
-Ff it's true - the training set isn't generated beforehand, and one set of quasi-random
+If it's true - the training set isn't generated beforehand, and one set of quasi-random
points is generated directly at each iteration in runtime. In this case, `minibatch` has no effect,
* `minibatch`: the number of subsets, if resampling == false.
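
For context on the fixed docstring: it appears to document the quasi-random sampling strategy (presumably `QuasiRandomTraining` from NeuralPDE.jl; its constructor is collapsed in this diff). A minimal sketch of the two `resampling` modes, assuming the keyword names listed above:

```julia
using NeuralPDE, QuasiMonteCarlo

# resampling = false: the full training set of 256 quasi-random points is
# generated once before training, and one of `minibatch` subsets is drawn
# at each iteration.
strategy_fixed = QuasiRandomTraining(256; sampling_alg = SobolSample(),
    resampling = false, minibatch = 32)

# resampling = true: a fresh set of 256 quasi-random points is generated at
# every iteration; `minibatch` has no effect in this mode.
strategy_fresh = QuasiRandomTraining(256; sampling_alg = SobolSample(),
    resampling = true)
```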
