In this book, we use tidyverse packages, primarily because they offer readability that is beneficial for beginners. This readability allows us to emphasize data analysis and statistical concepts. However, while the tidyverse is beginner-friendly, there are other approaches in R that are more efficient and can handle larger datasets more effectively. One such package is data.table, which is widely used in the R community. We briefly introduce data.table in this chapter. For those interested in diving deeper, there are numerous online resources, including this introduction1.
+
+5.1 Refining data tables
+
data.table is a separate package that needs to be installed. Once installed, we then need to load it along with the other packages we will use:
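+
+
library(tidyverse)
+library(dslabs)
+library(data.table)
+
+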
We will provide example code showing the data.table approaches to dplyr's mutate, filter, select, group_by, and summarize shown in Chapter 4. As in that chapter, we will use the murders dataset:
+
The first step when using data.table is to convert the data frame into a data.table object using the as.data.table function:
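+
+
murders_dt <- as.data.table(murders)
+
+
To select columns, one approach is to use a character vector inside the brackets, much as we would when subsetting a matrix (a quick sketch):
+
+
murders_dt[, c("state", "region")]
+
+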
We can also use the data.table .() notation to alert R that variables inside the parentheses are column names, not objects in the R environment. So the selection above can also be written like this:
+
+
murders_dt[, .(state, region)]
+
+
+5.1.2 Adding or transforming variables
+
We learned to use the dplyr mutate function with this example:
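+
+
murders <- murders |> mutate(rate = total / population * 100000)
+
+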
data.table uses an approach that avoids a new assignment (it updates by reference). This can help with large datasets that take up most of your computer's memory. The data.table := function permits us to do this:
+
+
murders_dt[, rate:=total/population*100000]
+
+
This adds a new column, rate, to the table. Notice that, as in dplyr, we used total and population without quotes.
+
To define multiple new columns, we can use the := function with multiple arguments:
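+
+
murders_dt[, ":="(rate = total / population * 100000, rank = rank(-rate))]  # two new columns at once
+
+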
A technical detail worth knowing is that := changes objects by reference. To see what this means, define a data table with x <- data.table(a = 1) and assign y <- x. Here y is actually referencing x; it is not a new object: y is just another name for x. Until you change one of them, a new object will not be made. However, the := function changes by reference, so if you change x this way, a new object is not made and y continues to be just another name for x:
+
+
x[,a:=2]
+y
+#> a
+#> 1: 2
+
+
You can also change x like this:
+
+
y[,a:=1]
+x
+#> a
+#> 1: 1
+
+
To avoid this, you can use the copy function which forces the creation of an actual copy:
+
+
x<-data.table(a =1)
+y<-copy(x)
+x[,a:=2]
+y
+#> a
+#> 1: 1
+
+
Note that the function as.data.table creates a copy of the data frame being converted. However, if working with a large data frame, it is helpful to avoid making this copy by using setDT, which converts in place:
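+
+
murders_dt <- setDT(murders)
+
+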
With data.table, we again use an approach similar to subsetting matrices, except data.table knows that rate refers to a column name and not an object in the R environment:
+
+
murders_dt[rate<=0.7]
+
+
Notice that we can combine the filter and select into one succinct command. Here are the state names and rates for those with rates below 0.7:
+
+
murders_dt[rate<=0.7, .(state, rate)]
+#> state rate
+#> 1: Hawaii 0.515
+#> 2: Iowa 0.689
+#> 3: New Hampshire 0.380
+#> 4: North Dakota 0.595
+#> 5: Vermont 0.320
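+
+
+5.2 Summarizing
+
The examples in this section use the heights dataset from dslabs. The sketch below converts it to a data.table and computes a simple summary; it also defines a helper function, median_min_max, which returns the median, minimum, and maximum and is assumed in the three number summary example that follows:
+
+
heights_dt <- as.data.table(heights)
+heights_dt[, .(avg = mean(height), sd = sd(height))]
+
+# helper returning the three number summary (median, minimum, maximum)
+median_min_max <- function(x){
+  qs <- quantile(x, c(0.5, 0, 1))
+  data.frame(median = qs[1], minimum = qs[2], maximum = qs[3])
+}
+
+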
In data.table we place a function call within .() to obtain the three number summary:
+
+
heights_dt[, .(median_min_max(height))]
+
+
+5.2.2 Group then summarize
+
The group_by followed by summarize in dplyr is performed in one line in data.table. We simply add the by argument to split the data into groups based on the values of a categorical variable:
+
+
heights_dt[, .(avg =mean(height), sd =sd(height)), by =sex]
+#> sex avg sd
+#> 1: Male 69.3 3.61
+#> 2: Female 64.9 3.76
+
+
+5.3 Sorting
+
We can order rows using the same approach we use for filter. Here are the states ordered by murder rate:
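+
+
murders_dt[order(rate)]
+
+
To sort in decreasing order, we can use a minus sign inside order:
+
+
murders_dt[order(-rate)]
+
+
+5.4 Exercises
+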
1. Add a column named rate to murders with the per 100,000 murder rate, as in the example code above.
+
2. Add a column rank containing the rank, from highest to lowest murder rate.
+
3. If we want to only show the states and population sizes, we can use:
+
+
murders_dt[, .(state, population)]
+
+
Show the state names and abbreviations in murders.
+
4. You can show just the New York row like this:
+
+
murders_dt[state=="New York"]
+
+
You can use other logical vectors to filter rows.
+
Show the top 5 states with the highest murder rates. Having already added the murder rate and rank, do not change the murders dataset; just show the result. Remember that you can filter based on the rank column.
+
5. We can remove rows using the != operator. For example, to remove Florida, we would do this:
+
+
no_florida<-murders_dt[state!="Florida"]
+
+
Create a new data frame called no_south that removes states from the South region. How many states are in this category? You can use the function nrow for this.
+
6. We can also use %in% to filter. You can therefore see the data from New York and Texas as follows:
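+
+
murders_dt[state %in% c("New York", "Texas")]
+
+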
Create a new data frame called murders_nw with only the states from the Northeast and the West. How many states are in this category?
+
7. Suppose you want to live in the Northeast or West and want the murder rate to be less than 1. We want to see the data for the states satisfying these options. Note that you can use logical operators with filter. Here is an example in which we filter to keep only small states in the Northeast region.
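For example, using fewer than 5 million people as the definition of small:
+
+
murders_dt[population < 5000000 & region == "Northeast"]
+
+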
Make sure murders has been defined with rate and rank and still has all states. Create a table called my_states that contains rows for states satisfying both conditions: they are in the Northeast or West and the murder rate is less than 1. Show only the state name, the rate, and the rank.
+
For exercises 8-12, we will be using the NHANES data.
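These data are available in the NHANES package (a sketch of the setup; install the package first if needed):
+
+
library(NHANES)
+head(NHANES)
+
+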
8. We will provide some basic facts about blood pressure. First let’s select a group to set the standard. We will use 20-to-29-year-old females. AgeDecade is a categorical variable with these ages. Note that the category is coded like ” 20-29”, with a space in front! Use the data.table package to compute the average and standard deviation of systolic blood pressure as saved in the BPSysAve variable. Save it to a variable called ref.
+
9. Report the min and max values for the same group.
+
10. Compute the average and standard deviation for females, but for each age group separately rather than the selected decade as in question 8. Note that the age groups are defined by AgeDecade.
+
11. Repeat exercise 10 for males.
+
12. For males between the ages of 40-49, compare systolic blood pressure across race as reported in the Race1 variable. Order the resulting table from lowest to highest average systolic blood pressure.
In this chapter, we will demonstrate how relatively simple ggplot2 code can create insightful and aesthetically pleasing plots. As motivation we will create plots that help us better understand trends in world health and economics. We will implement what we learned in Chapters 8 and 9 and learn how to augment the code to perfect the plots. As we go through our case study, we will describe relevant general data visualization principles and learn concepts such as faceting, time series plots, transformations, and ridge plots.
+
+10.1 Case study 1: new insights on poverty
+
Hans Rosling1 was the co-founder of the Gapminder Foundation2, an organization dedicated to educating the public by using data to dispel common myths about the so-called developing world. The organization uses data to show how actual trends in health and economics contradict the narratives that emanate from sensationalist media coverage of catastrophes, tragedies, and other unfortunate events. As stated in the Gapminder Foundation’s website:
+
+
+
+
Journalists and lobbyists tell dramatic stories. That’s their job. They tell stories about extraordinary events and unusual people. The piles of dramatic stories pile up in peoples’ minds into an over-dramatic worldview and strong negative stress feelings: “The world is getting worse!”, “It’s we vs. them!”, “Other people are strange!”, “The population just keeps growing!” and “Nobody cares!”
+
+
+
+
Hans Rosling conveyed actual data-based trends in a dramatic way of his own, using effective data visualization. This section is based on two talks that exemplify this approach to education: New Insights on Poverty3 and The Best Stats You've Ever Seen4. Specifically, in this section, we use data to attempt to answer the following two questions:
+
+
Is it a fair characterization of today’s world to say it is divided into western rich nations and the developing world in Africa, Asia, and Latin America?
+
Has income inequality across countries worsened during the last 40 years?
+
+
To answer these questions, we will be using the gapminder dataset provided in dslabs. This dataset was created using a number of spreadsheets available from the Gapminder Foundation. You can access the table like this:
+
+
library(tidyverse)
+library(dslabs)
+gapminder|>as_tibble()
+#> # A tibble: 10,545 × 9
+#> country year infant_mortality life_expectancy fertility population
+#> <fct> <int> <dbl> <dbl> <dbl> <dbl>
+#> 1 Albania 1960 115. 62.9 6.19 1636054
+#> 2 Algeria 1960 148. 47.5 7.65 11124892
+#> 3 Angola 1960 208 36.0 7.32 5270844
+#> 4 Antigua a… 1960 NA 63.0 4.43 54681
+#> 5 Argentina 1960 59.9 65.4 3.11 20619075
+#> # ℹ 10,540 more rows
+#> # ℹ 3 more variables: gdp <dbl>, continent <fct>, region <fct>
+
+
+10.1.1 Hans Rosling’s quiz
+
As done in the New Insights on Poverty video, we start by testing our knowledge regarding differences in child mortality across different countries. For each of the five pairs of countries below, which country do you think had the highest child mortality rates in 2015? Which pairs do you think are most similar?
+
+
Sri Lanka or Turkey
+
Poland or South Korea
+
Malaysia or Russia
+
Pakistan or Vietnam
+
Thailand or South Africa
+
+
When answering these questions without data, the non-European countries are typically picked as having higher child mortality rates: Sri Lanka over Turkey, South Korea over Poland, and Malaysia over Russia. It is also common to assume that countries considered to be part of the developing world: Pakistan, Vietnam, Thailand, and South Africa, have similarly high mortality rates.
+
To answer these questions with data, we can use dplyr. For example, for the first comparison we see that:
+
+
gapminder|>
+filter(year==2015&country%in%c("Sri Lanka","Turkey"))|>
+select(country, infant_mortality)
+#> country infant_mortality
+#> 1 Sri Lanka 8.4
+#> 2 Turkey 11.6
+
+
Turkey has the higher infant mortality rate.
+
We can use this code on all comparisons and find the following:
+
+
+
+
+
country
+
infant_mortality
+
country
+
infant_mortality
+
+
+
+
Sri Lanka
+
8.4
+
Turkey
+
11.6
+
+
+
Poland
+
4.5
+
South Korea
+
2.9
+
+
+
Malaysia
+
6.0
+
Russia
+
8.2
+
+
+
Pakistan
+
65.8
+
Vietnam
+
17.3
+
+
+
Thailand
+
10.5
+
South Africa
+
33.6
+
+
+
+
+
+
We see that the European countries on this list have higher child mortality rates: Poland has a higher rate than South Korea, and Russia has a higher rate than Malaysia. We also see that Pakistan has a much higher rate than Vietnam, and South Africa has a much higher rate than Thailand. It turns out that when Hans Rosling gave this quiz to educated groups of people, the average score was less than 2.5 out of 5, worse than what they would have obtained had they guessed randomly. This implies that more than ignorant, we are misinformed. In this chapter we see how data visualization helps inform us.
+
+10.2 Scatterplots
+
The reason for the misconception described in the previous section stems from the preconceived notion that the world is divided into two groups: the western world (Western Europe and North America), characterized by long life spans and small families, versus the developing world (Africa, Asia, and Latin America) characterized by short life spans and large families. But do the data support this dichotomous view?
+
The necessary data to answer this question is also available in our gapminder table. Using our newly learned data visualization skills, we will be able to tackle this challenge.
+
In order to analyze this world view, our first plot is a scatterplot of life expectancy versus fertility rates (average number of children per woman). We start by looking at data from about 50 years ago, when perhaps this view was first cemented in our minds.
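+
+
filter(gapminder, year == 1962) |>
+  ggplot(aes(fertility, life_expectancy)) +
+  geom_point()
+
+
Most points fall into one of two distinct categories:
+
+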
Life expectancy around 70 years and 3 or fewer children per family.
+
Life expectancy lower than 65 years and more than 5 children per family.
+
+
To confirm that indeed these countries are from the regions we expect, we can use color to represent continent.
+
+
filter(gapminder, year==1962)|>
+ggplot(aes(fertility, life_expectancy, color =continent))+
+geom_point()
+
+
+
+
+
+
+
In 1962, “the West versus developing world” view was grounded in some reality. Is this still the case 50 years later?
+
+10.3 Faceting
+
We could easily plot the 2012 data in the same way we did for 1962. To make comparisons, however, side by side plots are preferable. In ggplot2, we can achieve this by faceting variables: we stratify the data by some variable and make the same plot for each stratum.
+
To achieve faceting, we add a layer with the function facet_grid, which automatically separates the plots. This function lets you facet by up to two variables using columns to represent one variable and rows to represent the other. The function expects the row and column variables to be separated by a ~. Here is an example of a scatterplot with facet_grid added as the last layer:
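+
+
filter(gapminder, year %in% c(1962, 2012)) |>
+  ggplot(aes(fertility, life_expectancy, col = continent)) +
+  geom_point() +
+  facet_grid(continent ~ year)
+
+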
We see a plot for each continent/year pair. However, this is just an example and more than what we want, which is simply to compare 1962 and 2012. In this case, there is just one variable and we use . to let facet know that we are not using one of the variables:
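+
+
filter(gapminder, year %in% c(1962, 2012)) |>
+  ggplot(aes(fertility, life_expectancy, col = continent)) +
+  geom_point() +
+  facet_grid(. ~ year)
+
+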
This plot clearly shows that the majority of countries have moved from the developing world cluster to the western world one. In 2012, the western versus developing world view no longer makes sense. This is particularly clear when comparing Europe to Asia, the latter of which includes several countries that have made great improvements.
+
+10.3.1 facet_wrap
+
+
To explore how this transformation happened through the years, we can make the plot for several years. For example, we can add 1970, 1980, 1990, and 2000. If we do this, we will not want all the plots on the same row, the default behavior of facet_grid, since they will become too thin to show the data. Instead, we will want to use multiple rows and columns. The function facet_wrap permits us to do this by automatically wrapping the series of plots so that each display has viewable dimensions:
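+
+
years <- c(1962, 1970, 1980, 1990, 2000, 2012)
+continents <- c("Europe", "Asia")  # restrict to two continents so the panels stay readable (one possible choice)
+gapminder |>
+  filter(year %in% years & continent %in% continents) |>
+  ggplot(aes(fertility, life_expectancy, col = continent)) +
+  geom_point() +
+  facet_wrap(~year)
+
+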
This plot clearly shows how most Asian countries have improved at a much faster rate than European ones.
+
+10.3.2 Fixed scales for better comparisons
+
The default choice of the range of the axes is important. When not using facet, this range is determined by the data shown in the plot. When using facet, this range is determined by the data shown in all plots and therefore kept fixed across plots. This makes comparisons across plots much easier. For example, in the above plot, we can see that life expectancy has increased and the fertility has decreased across most countries. We see this because the cloud of points moves. This is not the case if we adjust the scales:
In the plot above, we have to pay special attention to the range to notice that the plot on the right has a larger life expectancy.
+
+10.4 Time series plots
+
The visualizations above effectively illustrate that data no longer supports the western versus developing world view. Once we see these plots, new questions emerge. For example, which countries are improving more and which ones less? Was the improvement constant during the last 50 years or was it more accelerated during certain periods? For a closer look that may help answer these questions, we introduce time series plots.
+
Time series plots have time in the x-axis and an outcome or measurement of interest on the y-axis. For example, here is a trend plot of United States fertility rates:
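+
+
gapminder |>
+  filter(country == "United States" & !is.na(fertility)) |>
+  ggplot(aes(year, fertility)) +
+  geom_point()
+
+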
We see that the trend is not linear at all. Instead, there is a sharp drop during the 1960s and 1970s to below 2. Then the trend comes back to 2 and stabilizes during the 1990s.
+
When the points are regularly and densely spaced, as they are here, we create curves by joining the points with lines, to convey that these data are from a single series, here a country. To do this, we use the geom_line function instead of geom_point.
This is particularly helpful when we look at two countries. If we subset the data to include two countries, one from Europe and one from Asia, then adapt the code above:
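+
+
countries <- c("South Korea", "Germany")
+gapminder |>
+  filter(country %in% countries & !is.na(fertility)) |>
+  ggplot(aes(year, fertility)) +
+  geom_line()
+
+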
Unfortunately, this is not the plot that we want. Rather than a line for each country, the points for both countries are joined. This is actually expected since we have not told ggplot anything about wanting two separate lines. To let ggplot know that there are two curves that need to be made separately, we assign each point to a group, one for each country:
+
+
countries<-c("South Korea","Germany")
+
+gapminder|>filter(country%in%countries&!is.na(fertility))|>
+ggplot(aes(year, fertility, group =country))+
+geom_line()
+
+
+
+
+
+
+
But which line goes with which country? We can assign colors to make this distinction. A useful side-effect of using the color argument to assign different colors to the different countries is that the data is automatically grouped:
+
+
countries<-c("South Korea","Germany")
+gapminder|>filter(country%in%countries&!is.na(fertility))|>
+ggplot(aes(year,fertility, col =country))+
+geom_line()
+
+
+
+
+
+
+
The plot clearly shows how South Korea’s fertility rate dropped drastically during the 1960s and 1970s, and by 1990 had a similar rate to that of Germany.
+
+10.4.1 Labels instead of legends
+
For trend plots we recommend labeling the lines rather than using legends since the viewer can quickly see which line is which country. This suggestion actually applies to most plots: labeling is usually preferred over legends.
+
We demonstrate how we can do this using the geomtextpath package. We define a data table with the label locations and then use a second mapping just for these labels:
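+
+
labels <- data.frame(country = countries, x = c(1975, 1965), y = c(60, 72))  # label positions chosen by eye
+
+gapminder |>
+  filter(country %in% countries) |>
+  ggplot(aes(year, life_expectancy, col = country)) +
+  geom_line() +
+  geom_text(data = labels, aes(x, y, label = country), size = 5) +
+  theme(legend.position = "none")
+
+
The label coordinates above are simply positions chosen by eye to sit near each curve; the geomtextpath package offers functions such as geom_textline that place the labels along the curves automatically.
+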
The plot clearly shows how an improvement in life expectancy followed the drops in fertility rates. In 1960, Germans lived 15 years longer than South Koreans, although by 2010 the gap is completely closed. It exemplifies the improvement that many non-western countries have achieved in the last 40 years.
+
+10.5 Data transformations
+
We now shift our attention to the second question related to the commonly held notion that wealth distribution across the world has become worse during the last decades. When general audiences are asked if poor countries have become poorer and rich countries become richer, the majority answers yes. By using stratification, histograms, smooth densities, and boxplots, we will be able to understand if this is in fact the case. First we learn how transformations can sometimes help provide more informative summaries and plots.
+
The gapminder data table includes a column with the countries’ gross domestic product (GDP). GDP measures the market value of goods and services produced by a country in a year. The GDP per person is often used as a rough summary of a country’s wealth. Here we divide this quantity by 365 to obtain the more interpretable measure dollars per day. Using current US dollars as a unit, a person surviving on an income of less than $2 a day is defined to be living in absolute poverty. We add this variable to the data table:
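+
+
gapminder <- gapminder |> mutate(dollars_per_day = gdp / population / 365)
+
+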
The GDP values are adjusted for inflation and represent current US dollars, so these values are meant to be comparable across the years. Of course, these are country averages and within each country there is much variability. All the graphs and insights described below relate to country averages and not to individuals.
+
+10.5.1 Log transformation
+
Here is a histogram of per day incomes from 1970:
+
+
past_year<-1970
+gapminder|>
+filter(year==past_year&!is.na(gdp))|>
+ggplot(aes(dollars_per_day))+
+geom_histogram(binwidth =1, color ="black")
+
+
+
+
+
+
+
We use the color = "black" argument to draw a boundary and clearly distinguish the bins.
+
In this plot, we see that for the majority of countries, averages are below $10 a day. However, the majority of the x-axis is dedicated to the 35 countries with averages above $10. So the plot is not very informative about countries with values below $10 a day.
+
It might be more informative to quickly be able to see how many countries have average daily incomes of about $1 (extremely poor), $2 (very poor), $4 (poor), $8 (middle), $16 (well off), $32 (rich), $64 (very rich) per day. These changes are multiplicative and log transformations convert multiplicative changes into additive ones: when using base 2, a doubling of a value turns into an increase by 1.
+
Here is the distribution if we apply a log base 2 transform:
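+
+
gapminder |>
+  filter(year == past_year & !is.na(gdp)) |>
+  ggplot(aes(log2(dollars_per_day))) +
+  geom_histogram(binwidth = 1, color = "black")
+
+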
In a way, this provides a close-up of the mid to lower income countries.
+
+10.5.2 Which base?
+
In the case above, we used base 2 in the log transformations. Other common choices are base \(\mathrm{e}\) (the natural log) and base 10.
+
In general, we do not recommend using the natural log for data exploration and visualization. This is because \(2^2, 2^3, 2^4, \dots\) or \(10^2, 10^3, \dots\) are easy to compute mentally, while the same is not true for \(\mathrm{e}^2, \mathrm{e}^3, \dots\). As a result, the natural log scale is not intuitive or easy to interpret.
+
In the dollars per day example, we used base 2 instead of base 10 because the resulting range is easier to interpret. The range of the values being plotted is approximately 0.33 to 49.
+
In base 10, this turns into a range that includes very few integers: just 0 and 1. With base two, our range includes -2, -1, 0, 1, 2, 3, 4, and 5. It is easier to compute \(2^x\) and \(10^x\) when \(x\) is an integer and between -10 and 10, so we prefer to have smaller integers in the scale. Another consequence of a limited range is that choosing the binwidth is more challenging. With log base 2, we know that a binwidth of 1 will translate to a bin with range \(x\) to \(2x\).
+
For an example in which base 10 makes more sense, consider population sizes. A log base 10 is preferable since the range for these is:
+
+
filter(gapminder, year==past_year)|>
+summarize(min =min(population), max =max(population))
+#> min max
+#> 1 46075 8.09e+08
In the above, we quickly see that country populations range from tens of thousands to nearly a billion.
+
+10.5.3 Transform the values or the scale?
+
There are two ways we can use log transformations in plots. We can log the values before plotting them or use log scales in the axes. The plot will look the same, except for the numbers in the axes. Both approaches are useful and have different strengths. If we log the data, we can more easily interpret intermediate values in the scale. For example, if we see:
+
----1----x----2--------3----
+
for log transformed data, we know that the value of \(x\) is about 1.5. If the scales are logged:
+
----10---x---100------1000---
+
then, to determine \(x\), we need to compute \(10^{1.5}\), which is not easy to do in our heads. The advantage of using logged scales is that the original values are displayed on the axes, which makes them easier to interpret. For example, we would see "32 dollars a day" instead of "5 log base 2 dollars a day".
+
As we learned earlier, if we want to scale the axis with logs, we can use the scale_x_continuous function. Instead of logging the values first, we apply this layer:
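+
+
gapminder |>
+  filter(year == past_year & !is.na(gdp)) |>
+  ggplot(aes(dollars_per_day)) +
+  geom_histogram(binwidth = 1, color = "black") +
+  scale_x_continuous(trans = "log2")
+
+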
Note that the log base 10 transformation has its own function: scale_x_log10(), but currently base 2 does not, although we could easily define our own.
+
There are other transformations available through the trans argument. As we learn later on, the square root (sqrt) transformation is useful when considering counts. The logistic transformation (logit) is useful when plotting proportions between 0 and 1. The reverse transformation is useful when we want smaller values to be on the right or on top.
+
+10.6 Multimodal distributions
+
In the histogram above we see two bumps: one at about 4 and another at about 32. In statistics these bumps are sometimes referred to as modes. The mode of a distribution is the value with the highest frequency. The mode of the normal distribution is the average. When a distribution, like the one above, doesn’t monotonically decrease from the mode, we call the locations where it goes up and down again local modes and say that the distribution has multiple modes.
+
The histogram above suggests that the 1970 country income distribution has two modes: one at about 2 dollars per day (1 in the log 2 scale) and another at about 32 dollars per day (5 in the log 2 scale). This bimodality is consistent with a dichotomous world made up of countries with average incomes less than $8 (3 in the log 2 scale) a day and countries above that.
+
+10.7 Comparing distributions
+
A histogram showed us that the 1970 income distribution values show a dichotomy. However, the histogram does not show us if the two groups of countries are west versus the developing world.
+
Let’s start by quickly examining the data by region. We reorder the regions by the median value and use a log scale.
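A sketch of that plot, reordering the region factor by its median income:
+
+
gapminder |>
+  filter(year == past_year & !is.na(gdp)) |>
+  mutate(region = reorder(region, dollars_per_day, FUN = median)) |>
+  ggplot(aes(dollars_per_day, region)) +
+  geom_point() +
+  scale_x_continuous(trans = "log2")
+
+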
We can already see that there is indeed a “west versus the rest” dichotomy: we see two clear groups, with the rich group composed of North America, Northern and Western Europe, New Zealand and Australia. We define groups based on this observation:
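+
+
# groups follow the "West / East Asia / Latin America / Sub-Saharan / Others" scheme;
+# the region names must match the levels in the gapminder table
+gapminder <- gapminder |>
+  mutate(group = case_when(
+    region %in% c("Western Europe", "Northern Europe", "Southern Europe",
+                  "Northern America", "Australia and New Zealand") ~ "West",
+    region %in% c("Eastern Asia", "South-Eastern Asia") ~ "East Asia",
+    region %in% c("Caribbean", "Central America", "South America") ~ "Latin America",
+    continent == "Africa" & region != "Northern Africa" ~ "Sub-Saharan",
+    TRUE ~ "Others"))
+
+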
In the next section we demonstrate how to visualize and compare distributions across groups.
+
+10.7.1 Boxplots
+
The exploratory data analysis above has revealed two characteristics about average income distribution in 1970. Using a histogram, we found a bimodal distribution with the modes relating to poor and rich countries. We now want to compare the distribution across these five groups to confirm the “west versus the rest” dichotomy. The number of points in each category is large enough that a summary plot may be useful. We could generate five histograms or five density plots, but it may be more practical to have all the visual summaries in one plot. We therefore start by stacking boxplots next to each other. Note that we add the layer theme(axis.text.x = element_text(angle = 90, hjust = 1)) to turn the group labels vertical, since they do not fit if we show them horizontally, and remove the axis label to make space.
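A sketch of those stacked boxplots, using the group variable defined above:
+
+
gapminder |>
+  filter(year == past_year & !is.na(gdp)) |>
+  mutate(group = reorder(group, dollars_per_day, FUN = median)) |>
+  ggplot(aes(group, dollars_per_day)) +
+  geom_boxplot() +
+  scale_y_continuous(trans = "log2") +
+  xlab("") +
+  theme(axis.text.x = element_text(angle = 90, hjust = 1))
+
+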
Boxplots have the limitation that by summarizing the data into five numbers, we might miss important characteristics of the data. One way to avoid this is by showing the data.
Showing each individual point does not always reveal important characteristics of the distribution. Although not the case here, when the number of data points is so large that there is over-plotting, showing the data can be counterproductive. Boxplots help with this by providing a five-number summary, but this has limitations too. For example, boxplots will not permit us to discover bimodal distributions. To see this, note that the two plots below are summarizing the same dataset:
+
+
+
+
+
+
+
+
In cases in which we are concerned that the boxplot summary is too simplistic, we can show stacked smooth densities or histograms. We refer to these as ridge plots. Because we are used to visualizing densities with values in the x-axis, we stack them vertically. Also, because more space is needed in this approach, it is convenient to overlay them. The package ggridges provides a convenient function for doing this. Here is the income data shown above with boxplots, now displayed as a ridge plot:
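+
+
library(ggridges)
+gapminder |>
+  filter(year == past_year & !is.na(dollars_per_day)) |>
+  ggplot(aes(dollars_per_day, group)) +
+  scale_x_continuous(trans = "log2") +
+  geom_density_ridges()
+
+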
Note that we have to invert the x and y used for the boxplot. A useful geom_density_ridges parameter is scale, which lets you determine the amount of overlap, with scale = 1 meaning no overlap and larger values resulting in more overlap.
+
If the number of data points is small enough, we can add them to the ridge plot using the following code:
By default, the height of the points is jittered and should not be interpreted in any way. To show data points, but without using jitter we can use the following code to add what is referred to as a rug representation of the data.
+10.7.3 Example: 1970 versus 2010 income distributions
+
Data exploration clearly shows that in 1970 there was a "west versus the rest" dichotomy. But does this dichotomy persist? Let's use facet_grid to see how the distributions have changed. To start, we will focus on two groups: the West and the rest. We make four histograms. Note that several countries were founded after 1970; for example, the Soviet Union divided into several countries during the 1990s. We also note that data was available for more countries in 2010.
+
We therefore make the plot only for countries with data in both years:
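+
+
present_year <- 2010
+years <- c(past_year, present_year)
+
+# keep only countries with income data in both years
+country_list_1 <- gapminder |>
+  filter(year == past_year & !is.na(dollars_per_day)) |>
+  pull(country)
+country_list_2 <- gapminder |>
+  filter(year == present_year & !is.na(dollars_per_day)) |>
+  pull(country)
+country_list <- intersect(country_list_1, country_list_2)
+
+# split the groups into West versus the rest
+gapminder |>
+  filter(year %in% years & country %in% country_list) |>
+  mutate(west = ifelse(group == "West", "West", "Developing")) |>
+  ggplot(aes(dollars_per_day)) +
+  geom_histogram(binwidth = 1, color = "black") +
+  scale_x_continuous(trans = "log2") +
+  facet_grid(year ~ west)
+
+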
We now see that the rich countries have become a bit richer, but percentage-wise, the poor countries appear to have improved more. In particular, we see that the proportion of developing countries earning more than $16 a day increased substantially.
+
To see which specific regions improved the most, we can remake the boxplots we made above, but now adding the year 2010 and then using facet to compare the two years.
Here, we pause to introduce another powerful ggplot2 feature. Because we want to compare each region before and after, it would be convenient to have the 1970 boxplot next to the 2010 boxplot for each region. In general, comparisons are easier when data are plotted next to each other.
+
So instead of faceting, we keep the data from each year together and ask ggplot2 to color (or fill) the boxplots depending on the year. Note that groups are automatically separated by year, with each pair of boxplots drawn next to each other. Because year is a number, we convert it into a factor, since ggplot2 automatically assigns a color to each category of a factor:
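+
+
gapminder |>
+  filter(year %in% years & country %in% country_list) |>
+  mutate(year = factor(year)) |>
+  mutate(group = reorder(group, dollars_per_day, FUN = median)) |>
+  ggplot(aes(group, dollars_per_day, fill = year)) +
+  geom_boxplot() +
+  scale_y_continuous(trans = "log2") +
+  xlab("") +
+  theme(axis.text.x = element_text(angle = 90, hjust = 1))
+
+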
The previous data exploration suggested that the income gap between rich and poor countries has narrowed considerably during the last 40 years. We used a series of histograms and boxplots to see this. We suggest a succinct way to convey this message with just one plot.
+
Let’s start by noting that density plots for income distribution in 1970 and 2010 deliver the message that the gap is closing:
In the 1970 plot, we see two clear modes: poor and rich countries. In 2010, it appears that some of the poor countries have shifted towards the right, closing the gap.
+
The next message we need to convey is that the reason for this change in distribution is that several poor countries became richer, rather than some rich countries becoming poorer. To do this, we can assign a color to the groups we identified during data exploration.
+
However, when we overlay two densities, the default is to have the area under each curve add up to 1 for each group, regardless of the group's size. To preserve information on the number of countries in each group, we first need to learn how to access computed variables with the geom_density function.
+
+10.7.4 Accessing computed variables
+
To have the areas of these densities be proportional to the size of the groups, we can simply multiply the y-axis values by the size of the group. From the geom_density help file, we see that the functions compute a variable called count that does exactly this. We want this variable to be on the y-axis rather than the density.
+
In ggplot2, we access these variables using the function after_stat. We will therefore use the following mapping:
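+
+
aes(dollars_per_day, y = after_stat(count), fill = group)
+
+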
If we want the densities to be smoother, we use the bw argument so that the same bandwidth is used in each density. We selected 0.75 after trying out several values.
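A sketch of the resulting plot (the axis limits are one reasonable choice):
+
+
gapminder |>
+  filter(year %in% years & country %in% country_list) |>
+  ggplot(aes(dollars_per_day, y = after_stat(count), fill = group)) +
+  scale_x_continuous(trans = "log2", limits = c(0.125, 300)) +
+  geom_density(alpha = 0.2, bw = 0.75) +
+  facet_grid(year ~ .)
+
+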
This plot now shows what is happening very clearly. The developing world distribution is changing. A third mode appears consisting of the countries that most narrowed the gap.
+
To visualize if any of the groups defined above are driving this we can quickly make a ridge plot:
Here we can clearly see how the distributions for East Asia, Latin America, and others shift markedly to the right, while Sub-Saharan Africa remains stagnant.
+
Notice that we order the levels of the group so that the West’s density is plotted first, then Sub-Saharan Africa. Having the two extremes plotted first allows us to see the remaining bimodality better.
+
+10.7.5 Weighted densities
+
As a final point, we note that these distributions weigh every country the same. So if most of the population is improving, but living in a very large country, such as China, we might not appreciate this. We can actually weight the smooth densities using the weight mapping argument. The plot then looks like this:
+
+
+
+
+
+
+
+
This particular figure shows very clearly how the income distribution gap is closing with most of the poor remaining in Sub-Saharan Africa.
+
+10.8 Case study 2: the ecological fallacy
+
Throughout this section, we have been comparing regions of the world. We have seen that, on average, some regions do better than others. In this section, we focus on describing the importance of variability within the groups when examining the relationship between a country’s infant mortality rates and average income.
+
We define a few more regions and compare the averages across regions:
+
+
+
+
+
+
+
+
The relationship between these two variables is almost perfectly linear and the graph shows a dramatic difference. While in the West less than 0.5% of infants die, in Sub-Saharan Africa the rate is higher than 6%!
+
Note that the plot uses a new transformation, the logistic transformation.
+
+10.8.1 Logistic transformation
+
The logistic or logit transformation for a proportion or rate \(p\) is defined as:
+
\[f(p) = \log \left( \frac{p}{1-p} \right)\]
+
When \(p\) is a proportion or probability, the quantity that is being logged, \(p/(1-p)\), is called the odds. In this case \(p\) is the proportion of infants that survived. The odds tell us how many more infants are expected to survive than to die. The log transformation makes this symmetric. If the rates are the same, then the log odds is 0. Fold increases or decreases turn into positive and negative increments, respectively.
+
This scale is useful when we want to highlight differences near 0 or 1. For survival rates this is important because a survival rate of 90% is unacceptable, while a survival rate of 99% is relatively good. We would much prefer a survival rate closer to 99.9%. We want our scale to highlight these differences and the logit does this. Note that 99.9/0.1 is about 10 times bigger than 99/1, which is about 10 times larger than 90/10. By using the log, these fold changes turn into constant increases.
+
+10.8.2 Show the data
+
Now, back to our plot. Based on the plot above, do we conclude that a country with a low income is destined to have low survival rate? Do we conclude that survival rates in Sub-Saharan Africa are all lower than in Southern Asia, which in turn are lower than in the Pacific Islands, and so on?
+
Jumping to this conclusion based on a plot showing averages is referred to as the ecological fallacy. The almost perfect relationship between survival rates and income is only observed for the averages at the region level. Once we show all the data, we see a somewhat more complicated story:
+
+
+
+
+
+
+
+
Specifically, we see that there is a large amount of variability. We see that countries from the same regions can be quite different and that countries with the same income can have different survival rates. For example, while on average Sub-Saharan Africa had the worst health and economic outcomes, there is wide variability within that group. Mauritius and Botswana are doing better than Angola and Sierra Leone, with Mauritius comparable to Western countries.
+
+10.9 Case study 3: vaccines and infectious diseases
+
Vaccines have helped save millions of lives. In the 19th century, before herd immunization was achieved through vaccination programs, deaths from infectious diseases, such as smallpox and polio, were common. However, today vaccination programs have become somewhat controversial despite all the scientific evidence for their importance.
+
The controversy started with a paper5 published in 1988 and led by Andrew Wakefield claiming there was a link between the administration of the measles, mumps, and rubella (MMR) vaccine and the appearance of autism and bowel disease. Despite much scientific evidence contradicting this finding, sensationalist media reports and fear-mongering from conspiracy theorists led parts of the public into believing that vaccines were harmful. As a result, many parents ceased to vaccinate their children. This dangerous practice can be potentially disastrous given that the Centers for Disease Control (CDC) estimates that vaccinations will prevent more than 21 million hospitalizations and 732,000 deaths among children born in the last 20 years (see Benefits from Immunization during the Vaccines for Children Program Era — United States, 1994-2013, MMWR6). The 1988 paper has since been retracted and Andrew Wakefield was eventually “struck off the UK medical register, with a statement identifying deliberate falsification in the research published in The Lancet, and was thereby barred from practicing medicine in the UK.” (source: Wikipedia7). Yet misconceptions persist, in part due to self-proclaimed activists who continue to disseminate misinformation about vaccines.
+
Effective communication of data is a strong antidote to misinformation and fear-mongering. Earlier we used an example provided by a Wall Street Journal article8 showing data related to the impact of vaccines on battling infectious diseases. Here we reconstruct that example.
+
+10.9.1 Data
+
The data used for these plots were collected, organized, and distributed by the Tycho Project9. They include weekly reported counts for seven diseases from 1928 to 2011, from all fifty states. We include the yearly totals in the dslabs package:
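+
+
library(dslabs)
+head(us_contagious_diseases)
+
+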
We create a temporary object dat that stores only the measles data, includes a per 100,000 rate, orders states by average value of disease and removes Alaska and Hawaii since they only became states in the late 1950s. Note that there is a weeks_reporting column that tells us for how many weeks of the year data was reported. We have to adjust for that value when computing the rate.
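A sketch of how dat might be constructed; the rate here is cases per 10,000, annualized using weeks_reporting:
+
+
the_disease <- "Measles"
+dat <- us_contagious_diseases |>
+  filter(!state %in% c("Hawaii", "Alaska") & disease == the_disease) |>
+  mutate(rate = count / population * 10000 * 52 / weeks_reporting) |>
+  mutate(state = reorder(state, rate))
+
+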
We add a vertical line at 1963 since this is when the vaccine was introduced [Control, Centers for Disease; Prevention (2014). CDC health information for international travel 2014 (the yellow book). p. 250. ISBN 9780199948505].
+
Now can we show data for all states in one plot? We have three variables to show: year, state, and rate. In the WSJ figure, they use the x-axis for year, the y-axis for state, and color hue to represent rates. However, the color scale they use, which goes from yellow to blue to green to orange to red, can be improved.
+
In our example, we want to use a sequential palette since there is no meaningful center, just low and high rates.
+
We use the geometry geom_tile to tile the region with colors representing disease rates. We use a square root transformation to avoid having the really high counts dominate the plot. Notice that missing values are shown in grey. Note that once a disease was pretty much eradicated, some states stopped reporting cases altogether. This is why we see so much grey after 1980.
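A sketch of the tile plot; the sequential palette and theme tweaks are one reasonable set of choices:
+
+
dat |>
+  ggplot(aes(year, state, fill = rate)) +
+  geom_tile(color = "grey50") +
+  scale_x_continuous(expand = c(0, 0)) +
+  scale_fill_gradientn(colors = RColorBrewer::brewer.pal(9, "Reds"), trans = "sqrt") +
+  geom_vline(xintercept = 1963, col = "blue") +
+  theme_minimal() +
+  theme(panel.grid = element_blank()) +
+  ggtitle(the_disease) +
+  ylab("") + xlab("")
+
+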
This plot makes a very striking argument for the contribution of vaccines. However, one limitation of this plot is that it uses color to represent quantity, which we earlier explained makes it harder to know exactly how high values are going. Position and lengths are better cues. If we are willing to lose state information, we can make a version of the plot that shows the values with position. We can also show the average for the US, which we compute like this:
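+
+
avg <- us_contagious_diseases |>
+  filter(disease == the_disease) |>
+  group_by(year) |>
+  summarize(us_rate = sum(count, na.rm = TRUE) / sum(population, na.rm = TRUE) * 10000)
+
+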
Now to make the plot we simply use the geom_line geometry:
+
+
dat|>
+filter(!is.na(rate))|>
+ggplot()+
+geom_line(aes(year, rate, group =state), color ="grey50",
+ show.legend =FALSE, alpha =0.2, size =1)+
+geom_line(mapping =aes(year, us_rate), data =avg, size =1)+
+scale_y_continuous(trans ="sqrt", breaks =c(5, 25, 125, 300))+
+ggtitle("Cases per 10,000 by state")+
+xlab("")+ylab("")+
+geom_text(data =data.frame(x =1955, y =50),
+ mapping =aes(x, y, label ="US average"),
+ color ="black")+
+geom_vline(xintercept =1963, col ="blue")
+#> Warning: Using `size` aesthetic for lines was deprecated in ggplot2 3.4.0.
+#> ℹ Please use `linewidth` instead.
+
+
+
+
+
+
+
In theory, we could use color to represent the categorical value state, but it is hard to pick 50 distinct colors.
+
+10.10 Exercises
+
+
Reproduce the image plot we previously made but for smallpox. For this plot, do not include years in which cases were not reported in 10 or more weeks.
+
Now reproduce the time series plot we previously made, but this time following the instructions of the previous question for smallpox.
+
For the state of California, make a time series plot showing rates for all diseases. Include only years with 10 or more weeks reporting. Use a different color for each disease.
+
Now do the same for the rates for the US. Hint: compute the US rate by using summarize: the total divided by total population.
The instructions below include screen shots from the installation process, in which we used the Chrome browser. Although not necessary, you can freely download and install Chrome from here: https://www.google.com/chrome/.
+
+
18.1 Installing R
+
RStudio is an interactive desktop environment, but it is not R, nor does it include R when you download and install it. Therefore, to use RStudio, we first need to install R.
+
+
You can download R from the Comprehensive R Archive Network (CRAN)1. Search for CRAN on your browser:
+
+
+
+
+
+
Once on the CRAN page, select the version for your operating system: Linux, Mac OS X, or Windows.
+
+
+
+
+
Here we show screenshots for Windows, but the process is similar for the other platforms. When they differ, we will also show screenshots for Mac OS X.
+
+
Once at the CRAN download page, you will have several choices. You want to install the base subdirectory. This installs the basic packages you need to get started. We will later learn how to install other needed packages from within R, rather than from this webpage.
+
+
+
+
+
+
Click on the link for the latest version to start the download.
+
+
+
+
+
+
If you are using Chrome, at the bottom of your browser you should see a tab that shows you the progress of the download. Once the installer file downloads, you can click on that tab to start the installation process. Other browsers may be different, so you will have to find where they store downloaded files and click on them to get the process started.
+
+
+
+
+
If using Safari on a Mac, you can access the download through the download button.
+
+
+
+
+
You can now click through different choices to finish the installation. We recommend you select all the default choices.
+
+
+
+
+
Select the default even when you get an ominous warning.
+
+
+
+
When selecting the language, consider that it will be easier to follow this book if you select English.
+
+
+
+
Continue to select all the defaults:
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
On the Mac it looks different, but you are also accepting the defaults:
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Congratulations! You have installed R.
+
+
+
18.2 Installing RStudio
+
+
You can start by searching for RStudio on your browser:
+
+
+
+
+
+
You should find the RStudio website as shown above. Once there, click on Download RStudio.
+
+
+
+
+
+
This will give you several options. For what we do in this book, it is more than enough to use the free Desktop version:
+
+
+
+
+
+
Once you select this option, it will take you to a page in which the operating system options are provided. Click the link showing your operating system.
+
+
+
+
+
+
Once the installation file is downloaded, click on the downloaded file to start the installation process:
+
+
+
+
+
+
We recommend clicking yes on all the defaults.
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
On the Mac, there are fewer clicks. You basically drag and drop the RStudio icon into the Applications folder icon here:
+
+
+
+
Congratulations! You have installed RStudio. You can now get started as you do on any other program in your computer. On Windows, you can open RStudio from the Start menu. If RStudio does not appear, you can search for it:
+
+
+
+
On the Mac, it will be in the Applications folder:
+
+
+
+
+
+
+
+
+
+
+
Pro tip for the Mac: To avoid using the mouse to open RStudio, hit command+spacebar to open Spotlight Search and type RStudio into that search bar, then hit enter.
Here we show the data.table version of some of the tidyverse commands we previously showed. The data.table functions are faster and more efficient with memory. In general, everything you can do with the tidyverse has a way to be done with data.table or base R which, although perhaps harder to read, is often more flexible, faster, and more efficient. Here we show just a few examples, but you can learn others using internet searches or code generation tools. For example, earlier we used left_join to combine tables:
tab<-left_join(murders, results_us_election_2016, by ="state")
+
+
In data.table, the merge function works similarly:
+
+
tab<-merge(murders, results_us_election_2016, by ="state", all.x =TRUE)
+
+
Instead of defining different functions for the different types of joins, merge uses the logical arguments all (full join), all.x (left join), and all.y (right join).
+
+14.4 Dates and times
+
The data.table package also includes some of the functionality of lubridate. For example, it includes the mday, month, and year functions:
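+
+
x <- as.IDate("2024-07-04")  # an arbitrary example date
+mday(x)
+month(x)
+year(x)
+
+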
Other similar functions are second, minute, hour, wday, week, isoweek, quarter, yearmon, and yearqtr.
+
The package also includes the classes IDate and ITime, which store dates and times more efficiently, which is convenient for large files with date stamps. You can convert dates in the usual R format using as.IDate and as.ITime.
With the exception of labels used to represent categorical data, we have focused on numerical data. But in many applications, data starts as text. Well-known examples are spam filtering, cyber-crime prevention, counter-terrorism and sentiment analysis. In all these cases, the raw data is composed of free form text. Our task is to extract insights from these data. In this section, we learn how to generate useful numerical summaries from text data to which we can apply some of the powerful data visualization and analysis techniques we have learned.
+
+17.1 Case study: Trump tweets
+
During the 2016 US presidential election, then candidate Donald J. Trump used his twitter account as a way to communicate with potential voters. On August 6, 2016, Todd Vaziri tweeted1 about Trump that “Every non-hyperbolic tweet is from iPhone (his staff). Every hyperbolic tweet is from Android (from him).” David Robinson conducted an analysis2 to determine if data supported this assertion. Here, we go through David’s analysis to learn some of the basics of text analysis. To learn more about text analysis in R, we recommend the Text Mining with R book3 by Julia Silge and David Robinson.
X.com (formerly Twitter) provides an API that permits downloading tweets. Brendan Brown runs the Trump Twitter Archive (https://www.thetrumparchive.com/), which compiles tweet data from Trump's account. The dslabs package includes tweets from the following range:
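+
+
library(tidyverse)
+library(lubridate)
+library(dslabs)
+range(trump_tweets$created_at)  # created_at holds the tweet timestamps
+
+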
The help file ?trump_tweets provides details on what each variable represents. The actual tweets are in the text variable:
+
+
trump_tweets$text[16413]|>str_wrap(width =options()$width)|>cat()
+#> Great to be back in Iowa! #TBT with @JerryJrFalwell joining me in
+#> Davenport- this past winter. #MAGA https://t.co/A5IF0QHnic
+
+
and the source variable tells us which device was used to compose and upload each tweet:
+
+
trump_tweets|>count(source)|>arrange(desc(n))|>head(5)
+#> source n
+#> 1 Twitter Web Client 10718
+#> 2 Twitter for Android 4652
+#> 3 Twitter for iPhone 3962
+#> 4 TweetDeck 468
+#> 5 TwitLonger Beta 288
+
+
We are interested in what happened during the 2016 campaign, so for this analysis we will focus on what was tweeted between the day Trump announced his campaign and election day. We define the following table containing just the tweets from that time period. We remove the Twitter for part of the source and filter out retweets.
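A sketch of this step; the column names source, created_at, and is_retweet come from the trump_tweets table, and the dates approximate the campaign period:
+
+
campaign_tweets <- trump_tweets |>
+  extract(source, "source", "Twitter for (.*)") |>
+  filter(source %in% c("Android", "iPhone") &
+           created_at >= ymd("2015-06-17") &
+           created_at < ymd("2016-11-08")) |>
+  filter(!is_retweet) |>
+  arrange(created_at)
+
+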
We can now use data visualization to explore the possibility that two different groups were tweeting from these devices. For each tweet, we will extract the hour, East Coast time (EST), it was tweeted and then compute the proportion of tweets tweeted at each hour for each device:
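+
+
campaign_tweets |>
+  mutate(hour = hour(with_tz(created_at, "EST"))) |>
+  count(source, hour) |>
+  group_by(source) |>
+  mutate(percent = n / sum(n)) |>
+  ungroup() |>
+  ggplot(aes(hour, percent, color = source)) +
+  geom_line() +
+  geom_point() +
+  labs(x = "Hour of day (EST)", y = "Proportion of tweets", color = "")
+
+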
We notice a big peak for the Android in the early hours of the morning, between 6 and 8 AM. There seems to be a clear difference in these patterns. We will therefore assume that two different entities are using these two devices.
+
We will now study how the tweets differ when we compare Android to iPhone. To do this, we introduce the tidytext package.
+
+17.2 Text as data
+
The tidytext package helps us convert free form text into a tidy table. Having the data in this format greatly facilitates data visualization and the use of statistical techniques.
+
The main function needed to achieve this is unnest_tokens. A token refers to a unit that we are considering to be a data point. The most common token will be words, but they can also be single characters, n-grams, sentences, lines, or a pattern defined by a regex. The functions will take a vector of strings and extract the tokens so that each one gets a row in the new table. Here is a simple example:
+
+
poem<-c("Roses are red,", "Violets are blue,",
+"Sugar is sweet,", "And so are you.")
+example<-tibble(line =c(1, 2, 3, 4),
+ text =poem)
+example
+#> # A tibble: 4 × 2
+#> line text
+#> <dbl> <chr>
+#> 1 1 Roses are red,
+#> 2 2 Violets are blue,
+#> 3 3 Sugar is sweet,
+#> 4 4 And so are you.
+example|>unnest_tokens(word, text)
+#> # A tibble: 13 × 2
+#> line word
+#> <dbl> <chr>
+#> 1 1 roses
+#> 2 1 are
+#> 3 1 red
+#> 4 2 violets
+#> 5 2 are
+#> # ℹ 8 more rows
+
+
Now let's look at an example from the Trump tweets. We will look at tweet number 3008 because it will later permit us to illustrate a couple of points:
+
+
i<-3008
+campaign_tweets$text[i]|>str_wrap(width =65)|>cat()
+#> Great to be back in Iowa! #TBT with @JerryJrFalwell joining me in
+#> Davenport- this past winter. #MAGA https://t.co/A5IF0QHnic
+campaign_tweets[i,]|>
+unnest_tokens(word, text)|>
+pull(word)
+#> [1] "great" "to" "be" "back"
+#> [5] "in" "iowa" "tbt" "with"
+#> [9] "jerryjrfalwell" "joining" "me" "in"
+#> [13] "davenport" "this" "past" "winter"
+#> [17] "maga" "https" "t.co" "a5if0qhnic"
+
+
Note that the function tries to convert tokens into words. A minor adjustment is to remove the links to pictures:
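+
+
links <- "https://t.co/[A-Za-z\\d]+|&amp;"
+campaign_tweets[i,] |>
+  mutate(text = str_replace_all(text, links, "")) |>
+  unnest_tokens(word, text) |>
+  pull(word)
+
+
Applying the same steps to all the tweets gives us a table of words, which we call tweet_words (a sketch):
+
+
tweet_words <- campaign_tweets |>
+  mutate(text = str_replace_all(text, links, "")) |>
+  unnest_tokens(word, text)
+
+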
And we can now answer questions such as “what are the most commonly used words?”:
+
+
tweet_words|>
+count(word)|>
+arrange(desc(n))
+#> # A tibble: 6,264 × 2
+#> word n
+#> <chr> <int>
+#> 1 the 2330
+#> 2 to 1413
+#> 3 and 1245
+#> 4 in 1190
+#> 5 i 1151
+#> # ℹ 6,259 more rows
+
+
It is not surprising that these are the top words. However, they are not informative. The tidytext package has a database of these commonly used words, referred to as stop words in text analysis:
+
+
head(stop_words)
+#> # A tibble: 6 × 2
+#> word lexicon
+#> <chr> <chr>
+#> 1 a SMART
+#> 2 a's SMART
+#> 3 able SMART
+#> 4 about SMART
+#> 5 above SMART
+#> # ℹ 1 more row
+
+
If we filter out rows representing stop words with filter(!word %in% stop_words$word):
we end up with a much more informative set of top 10 tweeted words:
+
+
tweet_words|>
+count(word)|>
+top_n(10, n)|>
+mutate(word =reorder(word, n))|>
+arrange(desc(n))
+#> # A tibble: 10 × 2
+#> word n
+#> <fct> <int>
+#> 1 trump2016 415
+#> 2 hillary 407
+#> 3 people 304
+#> 4 makeamericagreatagain 298
+#> 5 america 255
+#> # ℹ 5 more rows
+
+
Some exploration of the resulting words (not shown here) reveals a couple of unwanted characteristics in our tokens. First, some of our tokens are just numbers (years, for example). We want to remove these and we can find them using the regex ^\d+$. Second, some of our tokens come from a quote and they start with '. We want to remove the ' when it is at the start of a word so we will just str_replace. We add these two lines to the code above to generate our final table:
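+
+
tweet_words <- campaign_tweets |>
+  mutate(text = str_replace_all(text, links, "")) |>
+  unnest_tokens(word, text) |>
+  filter(!word %in% stop_words$word &
+           !str_detect(word, "^\\d+$")) |>
+  mutate(word = str_replace(word, "^'", ""))
+
+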
Now that we have all our words in a table, along with information about what device was used to compose the tweet they came from, we can start exploring which words are more common when comparing Android to iPhone.
+
For each word, we want to know if it is more likely to come from an Android tweet or an iPhone tweet. We therefore compute, for each word, what proportion of all words it represents for Android and iPhone, respectively.
We already see somewhat of a pattern in the types of words that are being tweeted more from one device versus the other. However, we are not interested in specific words but rather in the tone. Vaziri's assertion is that the Android tweets are more hyperbolic. So how can we check this with data? Hyperbolic is a hard sentiment to extract from words as it relies on interpreting phrases. However, words can be associated with more basic sentiments such as anger, fear, joy, and surprise. In the next section, we demonstrate basic sentiment analysis.
+
+17.3 Sentiment analysis
+
In sentiment analysis, we assign a word to one or more “sentiments”. Although this approach will miss context-dependent sentiments, such as sarcasm, when performed on large numbers of words, summaries can provide insights.
+
The first step in sentiment analysis is to assign a sentiment to each word. As we demonstrate, the tidytext package includes several maps or lexicons. The textdata package provides several of these lexicons.
+
The bing lexicon divides words into positive and negative sentiments. We can see this using the tidytext function get_sentiments:
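+
+
get_sentiments("bing")
+
+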
The AFINN lexicon assigns a score between -5 and 5, with -5 the most negative and 5 the most positive. Note that this lexicon needs to be downloaded the first time you call the function get_sentiments:
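+
+
get_sentiments("afinn")
+
+
For the comparison below we use the nrc lexicon, which assigns words to sentiments such as anger, fear, joy, and surprise; a sketch of how to define the nrc object used in the next code chunk:
+
+
nrc <- get_sentiments("nrc") |> select(word, sentiment)
+
+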
We can combine the words and sentiments using inner_join, which will only keep words associated with a sentiment. Here are five random words extracted from the tweets:
+
+
tweet_words|>inner_join(nrc, by ="word", relationship ="many-to-many")|>
+select(source, word, sentiment)|>
+sample_n(5)
+#> # A tibble: 5 × 3
+#> source word sentiment
+#> <chr> <chr> <chr>
+#> 1 Android enjoy joy
+#> 2 iPhone terrific sadness
+#> 3 iPhone tactics trust
+#> 4 Android clue anticipation
+#> 5 iPhone change fear
+
+
Now we are ready to perform a quantitative analysis comparing Android and iPhone by comparing the sentiments of the tweets posted from each device. Here we could perform a tweet-by-tweet analysis, assigning a sentiment to each tweet. However, this will be challenging since each tweet will have several sentiments attached to it, one for each word appearing in the lexicon. For illustrative purposes, we will perform a much simpler analysis: we will count and compare the frequencies of each sentiment appearing in each device.
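A sketch of that count, spreading the device counts into columns:
+
+
sentiment_counts <- tweet_words |>
+  left_join(nrc, by = "word", relationship = "many-to-many") |>
+  count(source, sentiment) |>
+  pivot_wider(names_from = source, values_from = n) |>
+  mutate(sentiment = replace_na(sentiment, replace = "none"))
+sentiment_counts
+
+
+17.4 Exercises
+
Project Gutenberg is a digital archive of public domain books. The R package gutenbergr facilitates the importation of these texts into R; it can be installed with install.packages("gutenbergr") and loaded with library(gutenbergr).
+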
You can see the books that are available like this:
+
+
gutenberg_metadata
+
+
1. Use str_detect to find the ID of the novel Pride and Prejudice.
+
2. We notice that there are several versions. The gutenberg_works() function filters this table to remove replicates and include only English language works. Read the help file and use this function to find the ID for Pride and Prejudice.
+
3. Use the gutenberg_download function to download the text for Pride and Prejudice. Save it to an object called book.
+
4. Use the tidytext package to create a tidy table with all the words in the text. Save the table in an object called words
+
5. We will later make a plot of sentiment versus location in the book. For this, it will be useful to add a column with the word number to the table.
+
6. Remove the stop words and numbers from the words object. Hint: use the anti_join.
+
7. Now use the AFINN lexicon to assign a sentiment value to each word.
+
8. Make a plot of sentiment score versus location in the book and add a smoother.
+
9. Assume there are 300 words per page. Convert the locations to pages and then compute the average sentiment in each page. Plot that average score by page. Add a smoother that appears to go through data.