diff --git a/docs/src/examples/complex.md b/docs/src/examples/complex.md
index a11a4d4d0f..ff9f1339a5 100644
--- a/docs/src/examples/complex.md
+++ b/docs/src/examples/complex.md
@@ -44,7 +44,7 @@ alg = NNODE(chain, opt, ps; strategy = StochasticTraining(500))
 sol = solve(problem, alg, verbose = false, maxiters = 5000, saveat = 0.01)
 ```
 
-Now, lets plot the predictions.
+Now, let's plot the predictions.
 
 `u1`:
 
diff --git a/docs/src/tutorials/Lotka_Volterra_BPINNs.md b/docs/src/tutorials/Lotka_Volterra_BPINNs.md
index b21d6685e5..a8a2bb0eb3 100644
--- a/docs/src/tutorials/Lotka_Volterra_BPINNs.md
+++ b/docs/src/tutorials/Lotka_Volterra_BPINNs.md
@@ -66,7 +66,7 @@ plot!(time, y, label = "noisy y")
 plot!(solution, labels = ["x" "y"])
 ```
 
-Lets define a PINN.
+Let's define a PINN.
 
 ```@example bpinn
 # Neural Networks must have 2 outputs as u -> [dx,dy] in function lotka_volterra()
diff --git a/docs/src/tutorials/dae.md b/docs/src/tutorials/dae.md
index a40d9c58d8..1f468caedd 100644
--- a/docs/src/tutorials/dae.md
+++ b/docs/src/tutorials/dae.md
@@ -37,7 +37,7 @@ alg = NNDAE(chain, opt; autodiff = false)
 sol = solve(prob, alg, verbose = true, dt = 1 / 100.0, maxiters = 3000, abstol = 1e-10)
 ```
 
-Now lets compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the DAE.
+Now let's compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the DAE.
 
 ```@example dae
 function example1(du, u, p, t)
diff --git a/docs/src/tutorials/ode.md b/docs/src/tutorials/ode.md
index 5b1afdfd2a..c78341c79e 100644
--- a/docs/src/tutorials/ode.md
+++ b/docs/src/tutorials/ode.md
@@ -61,7 +61,7 @@ Once these pieces are together, we call `solve` just like with any other `ODEPro
 sol = solve(prob, alg, verbose = true, maxiters = 2000, saveat = 0.01)
 ```
 
-Now lets compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the ODE.
+Now let's compare the predictions from the learned network with the ground truth which we can obtain by numerically solving the ODE.
 
 ```@example nnode1
 using OrdinaryDiffEq, Plots
diff --git a/docs/src/tutorials/ode_parameter_estimation.md b/docs/src/tutorials/ode_parameter_estimation.md
index 93b1c99eca..c8243e9587 100644
--- a/docs/src/tutorials/ode_parameter_estimation.md
+++ b/docs/src/tutorials/ode_parameter_estimation.md
@@ -26,7 +26,7 @@ u0 = [5.0, 5.0]
 prob = ODEProblem(lv, u0, tspan, [1.0, 1.0, 1.0, 1.0])
 ```
 
-As we want to estimate the parameters as well, lets get some data.
+As we want to estimate the parameters as well, let's get some data.
 
 ```@example param_estim_lv
 true_p = [1.5, 1.0, 3.0, 1.0]
@@ -36,7 +36,7 @@ t_ = sol_data.t
 u_ = reduce(hcat, sol_data.u)
 ```
 
-Now, lets define a neural network for the PINN using [Lux.jl](https://lux.csail.mit.edu/).
+Now, let's define a neural network for the PINN using [Lux.jl](https://lux.csail.mit.edu/).
 
 ```@example param_estim_lv
 rng = Random.default_rng()
@@ -81,7 +81,7 @@ plot(sol, labels = ["u1_pinn" "u2_pinn"])
 plot!(sol_data, labels = ["u1_data" "u2_data"])
 ```
 
-We can see it is a good fit! Now lets see if we have the parameters of the equation also estimated correctly or not.
+We can see it is a good fit! Now let's see if we have the parameters of the equation also estimated correctly or not.
 
 ```@example param_estim_lv
 sol.k.u.p
diff --git a/src/training_strategies.jl b/src/training_strategies.jl
index 68aa9cd71f..858e93a237 100644
--- a/src/training_strategies.jl
+++ b/src/training_strategies.jl
@@ -160,7 +160,7 @@ that accelerate the convergence in high dimensional spaces over pure random sequ
 * `sampling_alg`: the quasi-Monte Carlo sampling algorithm,
 * `resampling`: if it's false - the full training set is generated in advance before training,
   and at each iteration, one subset is randomly selected out of the batch.
-  Ff it's true - the training set isn't generated beforehand, and one set of quasi-random
+  If it's true - the training set isn't generated beforehand, and one set of quasi-random
   points is generated directly at each iteration in runtime. In this case, `minibatch` has no effect,
 * `minibatch`: the number of subsets, if resampling == false.