
Commit e9ac3d7

Merge pull request #41 from JT-39/potential_typo_correction
Changed potential typo
b-rodrigues authored Oct 2, 2023
2 parents ee2e623 + 023f787 commit e9ac3d7
Showing 6 changed files with 20 additions and 20 deletions.
5 changes: 2 additions & 3 deletions fprog.qmd
@@ -484,10 +484,10 @@ sqrt(-5)
This only raises a warning and returns `NaN` (Not a Number). This can be quite
dangerous, especially when working non-interactively, which is what we will be
doing a lot later on. It is much better if a pipeline fails early due to an
-error, than dragging a `NaN` value. This also happens with `sqrt()`:
+error, than dragging a `NaN` value. This also happens with `log10()`:

```{r}
-sqrt(-10)
+log10(-10)
```

So it could be useful to redefine these functions to raise an error instead, for
@@ -705,7 +705,6 @@ fact_iter <- function(n){
result = 1
for(i in 1:n){
result = result * i
-    i = i + 1
}
result
}
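
The first hunk above mentions that it could be useful to redefine functions like `sqrt()` so that they raise an error instead of returning `NaN`. A minimal sketch of such a wrapper, with an assumed name `strict_sqrt` (an illustration, not necessarily the book's actual implementation):

```r
# Hypothetical wrapper: fail fast on negative input instead of
# returning NaN with a warning, as sqrt() does by default.
strict_sqrt <- function(x){
  if (any(x < 0)) stop("x must be non-negative")
  sqrt(x)
}

strict_sqrt(4)    # returns 2
# strict_sqrt(-5) # stops with an error instead of returning NaN
```
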
4 changes: 2 additions & 2 deletions lit_prog.qmd
@@ -753,7 +753,7 @@ create this. This is this function:
```{r}
return_section <- function(dataset, var){
a <- knitr::knit_expand(text = c(
-    ## Frequency table for variable: {{variable}}",
+    "## Frequency table for variable: {{variable}}",
create_table(dataset, var)),
variable = var)
cat(a, sep = "\n")
@@ -984,7 +984,7 @@ that I recommend tick the following two important boxes:
- Work the same way regardless of output format (Word, PDF or Html);
- Work for any type of table: summary tables, regression tables, two-way tables, etc.

-Let's start with the simplest type of table, which would is a table that simply
+Let's start with the simplest type of table, which would be a table that simply
shows some rows of data. `{knitr}` comes with the `kable()` function, but this
function generates a very plain looking output. For something
publication-worthy, we recommend the `{flextable}` package, developed by
7 changes: 4 additions & 3 deletions packages.qmd
@@ -691,6 +691,7 @@ is that I’ve added examples:
````{verbatim}
```{r examples-get_laspeyeres, eval = FALSE}
#' \dontrun{
+#' country_level_data_laspeyeres <- get_laspeyeres_index(country_level_data)
#' commune_level_data_laspeyeres <- get_laspeyeres(commune_level_data)
#' }
```
@@ -730,7 +731,7 @@ Something important to notice as well: my fusen-ready `.Rmd` file is simply
called `save_data.Rmd`, while the generated, inflated file, that will be part of
the package under the `vignettes/` folder is called `dev-save_data.Rmd`.

-When you inflate you a flat file into a package, the R console will be verbose.
+When you inflate a flat file into a package, the R console will be verbose.
This lists all files that are created or modified, but there is also a long list
of checks that run automatically. This is the output of `devtools::check()` that
is included inside `fusen::inflate()`. This function verifies that your package,
@@ -848,7 +849,7 @@ It is also possible to install the package from a specific branch:

```{r, eval = F}
remotes::install_github(
"github_username/repository_name@repo_name"
"github_username/repository_name@branch_name"
)
```

@@ -857,7 +858,7 @@ commit:

```{r, eval = F}
remotes::install_github(
"github_username/repository_name@repo_name",
"github_username/repository_name@branch_name",
ref = "commit_hash"
)
```
16 changes: 8 additions & 8 deletions repro_cont.qmd
@@ -82,7 +82,7 @@ architecture with their Apple silicon CPUs (as of writing, the Mac Pro is the
only computer manufactured by Apple that doesn't use an Apple silicon CPU and
only because it was released in 2019) and it wouldn't surprise me if other
manufacturers follow suit and develop their own ARM cpus. This means that
-projects written today may not run anymore in the future, because of this
+projects written today may not run anymore in the future, because of these
architecture changes. Libraries compiled for current architectures would need to
be recompiled for ARM, and that may be difficult.

@@ -534,7 +534,7 @@ Google search (but I'm giving it to you, dear reader, for free).

Then come `RUN` statements. The first one uses Ubuntu's package manager to first
refresh the repositories (this ensures that our local Ubuntu installation
-repositories are in synch with the latest software updates that were pushed to
+repositories are in sync with the latest software updates that were pushed to
the central Ubuntu repos). Then we use Ubuntu's package manager to install
`r-base`. `r-base` is the package that installs R. We then finish this
Dockerfile by running `CMD ["R"]`. This is the command that will be executed
@@ -587,7 +587,7 @@ What is going on here? When you run a container, the command specified by `CMD`
gets executed, and then the container quits. So here, the container ran the
command `R`, which started the R interpreter, but then quit immediately. When
quitting R, users should specify if they want to save or not save the workspace.
-This is what the message above is telling us. So, how can be use this? Is there
+This is what the message above is telling us. So, how can we use this? Is there
a way to use this R version interactively?

Yes, there is a way to use this R version boxed inside our Docker image
@@ -694,7 +694,7 @@ as a file. I’ll explain how later.
The Rocker project offers many different images, which are described
[here](https://rocker-project.org/images/)^[https://rocker-project.org/images/].
We are going to be using the *versioned* images. These are images that ship
-specific versions of R. This way, it doesn't matter when the image gets build,
+specific versions of R. This way, it doesn't matter when the image gets built,
the same version of R will be installed by getting built from source. Let me
explain why building R from source is important. When we build the image from
the Dockerfile we wrote before, R gets installed from the Ubuntu repositories.
@@ -882,7 +882,7 @@ and final step:

This runs the `R` program from the Linux command line with the option `-e`. This
option allows you to pass an `R` expression to the command line, which needs to
-be written between `""`. Using `R -e` will quickly become an habit, because this
+be written between `""`. Using `R -e` will quickly become a habit, because this
is how you can run R non-interactively, from the command line. The expression we
pass sets the working directory to `/home/housing`, and then we use
`renv::init()` and `renv::restore()` to restore the packages from the
@@ -1086,7 +1086,7 @@ the R session in the right directory. So we move to the right directory, then we
run the pipeline using `R -e "targets::tar_make()"`. Notice that we do both
operations within a `RUN` statement. This means that the pipeline will run at
build-time (remember, `RUN` statements run at build-time, `CMD` statements at
-run-time). In order words, the image will contain the outputs. This way, if the
+run-time). In other words, the image will contain the outputs. This way, if the
build process and the pipeline take a long time to run, you can simply leave
them running overnight for example. In the morning, while sipping on your
coffee, you can then simply run the container to instantly get the outputs. This
@@ -1320,7 +1320,7 @@ By following these two rules, you should keep any issues to a minimum. When or
if you need to update R and/or the package library on your machine, simply
create a new Docker image that reflects these changes.

-However, if work in a field where operating system versions matter, then yes,
+However, if you work in a field where operating system versions matter, then yes,
you should find a way to either use the dockerized environment for development,
or you should install Ubuntu on your computer (the same version as in Docker of
course).
@@ -1636,7 +1636,7 @@ needs mitigation, and thus a plan B. This plan B could be to host the images
yourself, by saving them using `docker save`. Or you could even self-host an
image registry (or lobby your employer/institution/etc to host a registry for
its developers/data scientists/researchers). In any case, it's good to have
-options and now what potential risks using this technology entail.
+options and know what potential risks using this technology entail.

### Is Docker enough?

4 changes: 2 additions & 2 deletions repro_intro.qmd
@@ -216,7 +216,7 @@ can find in the `renv` folder. Let’s take a look at the contents of this folde

::: {.content-hidden when-format="pdf"}
```bash
-owner@localhost ➤ ls renv
+owner@localhost ➤ ls -la renv
```
:::

@@ -575,7 +575,7 @@ The first problem, and I’m repeating myself here, is that `{renv}` only record
the R version used for the project, but does not restore it when calling
`renv::restore()`. You need to install the right R version yourself. On Windows
this should be fairly easy to do, but then you need to start juggling R versions
-and know which scrips need which R version, which can get confusing.
+and know which scripts need which R version, which can get confusing.

There is the `{rig}` package that makes it easy to install and switch between R
versions that you could check
4 changes: 2 additions & 2 deletions targets.qmd
@@ -926,7 +926,7 @@ This pipeline loads the `.csv` file from before and creates a summary of the
data as well as plot. But we don’t simply want these objects to be saved as
`.rds` files by the pipeline, we want to be able to use them to write a document
(either in the `.Rmd` or `.Qmd` format). For this, we need another package,
-called `{tarchetypes}`. This package comes many functions that allow you to
+called `{tarchetypes}`. This package comes with many functions that allow you to
define new types of targets (these functions are called *target factories* in
`{targets}` jargon). The new target factory that we need is
`tarchetypes::tar_render()`. As you can probably guess from the name, this
@@ -1373,7 +1373,7 @@ default, `data()` loads the data in the global environment. But remember, we
want our function to be pure, meaning, it should only return the data object and
not load anything into the global environment! So that’s where the temporary
environment created in the first line of the body of the function comes into
-play. What happens is that the functions loads the data object into this
+play. What happens is that the function loads the data object into this
temporary environment, which is different from the global environment. Once
we’re done, we can simply discard this environment, and so our global
environment stays clean.
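
The hunk above describes loading a dataset into a temporary environment so that the loading function stays pure. A minimal sketch of that pattern, with assumed names (`read_data` and its arguments are illustrative, not necessarily the book's exact code):

```r
# Hypothetical pure loader: data() fills a throwaway environment,
# so the global environment is never touched.
read_data <- function(dataset_name, package_name){
  tmp_env <- new.env()
  data(list = dataset_name,
       package = package_name,
       envir = tmp_env)
  get(dataset_name, envir = tmp_env)
}

# Example usage: mtcars_df <- read_data("mtcars", "datasets")
```
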
