15 Numbers
You are reading the work-in-progress second edition of R for Data Science. This chapter should be readable but is currently undergoing final polishing. You can find the complete first edition at https://r4ds.had.co.nz.
15.1 Introduction
In this chapter, you’ll learn useful tools for creating and manipulating numeric vectors. We’ll start by going into a little more detail of count() before diving into various numeric transformations. You’ll then learn about more general transformations that can be applied to other types of vector, but are often used with numeric vectors. Then you’ll learn about a few more useful summaries and how they can also be used with mutate().
15.1.1 Prerequisites
This chapter mostly uses functions from base R, which are available without loading any packages. But we still need the tidyverse because we’ll use these base R functions inside of tidyverse functions like mutate() and filter(). Like in the last chapter, we’ll use real examples from nycflights13, as well as toy examples made with c() and tribble().
15.1.2 Counts
It’s surprising how much data science you can do with just counts and a little basic arithmetic, so dplyr strives to make counting as easy as possible with count(). This function is great for quick exploration and checks during analysis:
flights |> count(dest)
#> # A tibble: 105 × 2
#> dest n
#> <chr> <int>
#> 1 ABQ 254
#> 2 ACK 265
#> 3 ALB 439
#> 4 ANC 8
#> 5 ATL 17215
#> 6 AUS 2439
#> # … with 99 more rows
(Despite the advice in Chapter 7, I usually put count() on a single line because I’m usually using it at the console for a quick check that my calculation is working as expected.)
If you want to see the most common values, add sort = TRUE:
flights |> count(dest, sort = TRUE)
#> # A tibble: 105 × 2
#> dest n
#> <chr> <int>
#> 1 ORD 17283
#> 2 ATL 17215
#> 3 LAX 16174
#> 4 BOS 15508
#> 5 MCO 14082
#> 6 CLT 14064
#> # … with 99 more rows
And remember that if you want to see all the values, you can use |> View() or |> print(n = Inf).
You can perform the same computation “by hand” with group_by(), summarise(), and n(). This is useful because it allows you to compute other summaries at the same time:
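The original code block is not shown here, so here is a sketch of what that computation might look like; the extra delay summary is an illustrative choice, not the only option:

```r
library(dplyr)
library(nycflights13)

flights |>
  group_by(dest) |>
  summarise(
    n = n(),                              # same result as count(dest)
    delay = mean(arr_delay, na.rm = TRUE) # an extra summary, computed at the same time
  )
```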
n() is a special summary function that doesn’t take any arguments and instead accesses information about the “current” group. This means that it only works inside dplyr verbs:
n()
#> Error in `n()`:
#> ! Must be used inside dplyr verbs.
There are a couple of variants of n() that you might find useful:

n_distinct(x) counts the number of distinct (unique) values of one or more variables. For example, we could figure out which destinations are served by the most carriers:

flights |>
group_by(dest) |>
summarise(carriers = n_distinct(carrier)) |>
arrange(desc(carriers))
#> # A tibble: 105 × 2
#>   dest  carriers
#>   <chr>    <int>
#> 1 ATL          7
#> 2 BOS          7
#> 3 CLT          7
#> 4 ORD          7
#> 5 TPA          7
#> 6 AUS          6
#> # … with 99 more rows

A weighted count is a sum. For example, you could “count” the number of miles each plane flew:
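A sketch of how you might do that by hand with group_by() and summarise() (the variable name miles is my choice):

```r
library(dplyr)
library(nycflights13)

# Total miles flown per plane: a sum, i.e. a "count" weighted by distance
flights |>
  group_by(tailnum) |>
  summarise(miles = sum(distance))
```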
Weighted counts are a common problem, so count() has a wt argument that does the same thing:

flights |> count(tailnum, wt = distance)
#> # A tibble: 4,044 × 2
#>   tailnum      n
#>   <chr>    <dbl>
#> 1 D942DN    3418
#> 2 N0EGMQ  250866
#> 3 N10156  115966
#> 4 N102UW   25722
#> 5 N103US   24619
#> 6 N104UW   25157
#> # … with 4,038 more rows

You can count missing values by combining sum() and is.na(). In the flights dataset this represents flights that are cancelled:
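For example, treating a missing dep_time as the marker of a cancelled flight (an assumption consistent with how the chapter uses the data), a sketch might be:

```r
library(dplyr)
library(nycflights13)

# Count missing departure times (i.e. cancelled flights) per destination
flights |>
  group_by(dest) |>
  summarise(n_cancelled = sum(is.na(dep_time)))
```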
15.1.3 Exercises
1. How can you use count() to count the number of rows with a missing value for a given variable?

2. Expand the following calls to count() to instead use group_by(), summarise(), and arrange():

flights |> count(dest, sort = TRUE)

flights |> count(tailnum, wt = distance)
15.2 Numeric transformations
Transformation functions work well with mutate() because their output is the same length as the input. The vast majority of transformation functions are already built into base R. It’s impractical to list them all, so this section will show you the most useful. As an example, while R provides all the trigonometric functions that you might dream of, I don’t list them here because they’re rarely needed for data science.
15.2.1 Arithmetic and recycling rules
We introduced the basics of arithmetic (+, -, *, /, ^) in Chapter 3 and have used them a bunch since. These functions don’t need a huge amount of explanation because they do what you learned in grade school. But we need to briefly talk about the recycling rules, which determine what happens when the left and right hand sides have different lengths. This is important for operations like flights |> mutate(air_time = air_time / 60) because there are 336,776 numbers on the left of / but only one on the right.
R handles mismatched lengths by recycling, or repeating, the short vector. We can see this in operation more easily if we create some vectors outside of a data frame:
Generally, you only want to recycle single numbers (i.e. vectors of length 1), but R will recycle any shorter length vector. It will usually (but not always) warn you if the length of the longer vector isn’t a multiple of the length of the shorter:
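A minimal sketch with toy vectors of my choosing:

```r
x <- c(1, 2, 10, 20)

x / 5          # the length-1 vector 5 is recycled four times
#> [1] 0.2 0.4 2.0 4.0

x / c(5, 10)   # the length-2 vector is recycled twice: 1/5, 2/10, 10/5, 20/10
#> [1] 0.2 0.2 2.0 2.0

x * c(1, 2, 3) # 4 is not a multiple of 3, so R warns
#> Warning in x * c(1, 2, 3): longer object length is not a multiple of shorter
#> object length
#> [1]  1  4 30 20
```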
These recycling rules are also applied to logical comparisons (==, <, <=, >, >=, !=) and can lead to a surprising result if you accidentally use == instead of %in% and the data frame has an unfortunate number of rows. For example, take this code which attempts to find all flights in January and February:
flights |>
filter(month == c(1, 2))
#> # A tibble: 25,977 × 19
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 542 540 2 923 850
#> 3 2013 1 1 554 600 6 812 837
#> 4 2013 1 1 555 600 5 913 854
#> 5 2013 1 1 557 600 3 838 846
#> 6 2013 1 1 558 600 2 849 851
#> # … with 25,971 more rows, and 11 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>
The code runs without error, but it doesn’t return what you want. Because of the recycling rules it finds flights in odd-numbered rows that departed in January and flights in even-numbered rows that departed in February. And unfortunately there’s no warning because flights has an even number of rows.
To protect you from this type of silent failure, most tidyverse functions use a stricter form of recycling that only recycles single values. Unfortunately that doesn’t help here, or in many other cases, because the key computation is performed by the base R function ==
, not filter()
.
15.2.2 Minimum and maximum
The arithmetic functions work with pairs of variables. Two closely related functions are pmin() and pmax(), which when given two or more variables will return the smallest or largest value in each row:
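For example, with a toy tibble of my choosing:

```r
library(dplyr)

df <- tribble(
  ~x, ~y,
   1,  3,
   5,  2,
   7, NA
)

df |>
  mutate(
    min = pmin(x, y, na.rm = TRUE), # row-wise minimum
    max = pmax(x, y, na.rm = TRUE)  # row-wise maximum
  )
#> # A tibble: 3 × 4
#>       x     y   min   max
#>   <dbl> <dbl> <dbl> <dbl>
#> 1     1     3     1     3
#> 2     5     2     2     5
#> 3     7    NA     7     7
```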
Note that these are different to the summary functions min() and max() which take multiple observations and return a single value. You can tell that you’ve used the wrong form when all the minimums and all the maximums have the same value:
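A sketch of the wrong form, using the same kind of toy tibble:

```r
library(dplyr)

df <- tribble(
  ~x, ~y,
   1,  3,
   5,  2,
   7, NA
)

# min()/max() collapse *all* the values to one number, which is then recycled:
df |>
  mutate(
    min = min(x, y, na.rm = TRUE),
    max = max(x, y, na.rm = TRUE)
  )
#> # A tibble: 3 × 4
#>       x     y   min   max
#>   <dbl> <dbl> <dbl> <dbl>
#> 1     1     3     1     7
#> 2     5     2     1     7
#> 3     7    NA     1     7
```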
15.2.3 Modular arithmetic
Modular arithmetic is the technical name for the type of math you did before you learned about real numbers, i.e. division that yields a whole number and a remainder. In R, %/% does integer division and %% computes the remainder:
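For example:

```r
1:10 %/% 3 # integer division
#>  [1] 0 0 1 1 1 2 2 2 3 3
1:10 %% 3  # remainder
#>  [1] 1 2 0 1 2 0 1 2 0 1
```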
Modular arithmetic is handy for the flights dataset, because we can use it to unpack the sched_dep_time variable into hour and minute:
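A sketch of that unpacking (the .keep argument, my choice here, just limits the columns that print):

```r
library(dplyr)
library(nycflights13)

flights |>
  mutate(
    hour = sched_dep_time %/% 100,  # e.g. 515 -> 5
    minute = sched_dep_time %% 100, # e.g. 515 -> 15
    .keep = "used"
  )
```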
We can combine that with the mean(is.na(x))
trick from Section 14.4 to see how the proportion of cancelled flights varies over the course of the day. The results are shown in Figure 15.1.
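The figure itself is not reproduced here, but the computation behind it might look something like this sketch (the exact filtering and aesthetics in the book may differ):

```r
library(dplyr)
library(ggplot2)
library(nycflights13)

flights |>
  group_by(hour = sched_dep_time %/% 100) |>
  summarise(prop_cancelled = mean(is.na(dep_time)), n = n()) |>
  filter(hour > 1) |>  # drop the tiny number of very early flights
  ggplot(aes(hour, prop_cancelled)) +
  geom_line(color = "grey50") +
  geom_point(aes(size = n))
```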
15.2.4 Logarithms
Logarithms are an incredibly useful transformation for dealing with data that ranges across multiple orders of magnitude. They also convert exponential growth to linear growth. For example, take compounding interest — the amount of money you have at year + 1 is the amount of money you had at year multiplied by the interest rate. That gives a formula like money = starting * interest ^ year:
starting <- 100
interest <- 1.05
money <- tibble(
year = 2000 + 1:50,
money = starting * interest^(1:50)
)
If you plot this data, you’ll get an exponential curve:
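A minimal sketch of that plot (re-creating the money tibble from above so the snippet runs on its own):

```r
library(ggplot2)
library(tibble)

starting <- 100
interest <- 1.05

money <- tibble(
  year = 2000 + 1:50,
  money = starting * interest^(1:50)
)

# money grows exponentially, so this curve bends upward
ggplot(money, aes(year, money)) +
  geom_line()
```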
Log-transforming the y-axis gives a straight line:
ggplot(money, aes(year, money)) +
geom_line() +
scale_y_log10()
This is a straight line because a little algebra reveals that log(money) = log(starting) + year * log(interest), which matches the pattern for a line, y = m * x + b. This is a useful pattern: if you see a (roughly) straight line after log-transforming the y-axis, you know that there’s underlying exponential growth.
If you’re log-transforming your data with dplyr you have a choice of three logarithms provided by base R: log() (the natural log, base e), log2() (base 2), and log10() (base 10). I recommend using log2() or log10(). log2() is easy to interpret because a difference of 1 on the log scale corresponds to doubling on the original scale and a difference of -1 corresponds to halving; whereas log10() is easy to back-transform because (e.g.) 3 is 10^3 = 1000.
The inverse of log() is exp(); to compute the inverse of log2() or log10() you’ll need to use 2^ or 10^.
15.2.5 Rounding
Use round(x) to round a number to the nearest integer:
round(123.456)
#> [1] 123
You can control the precision of the rounding with the second argument, digits. round(x, digits) rounds to the nearest 10^-n, so digits = 2 will round to the nearest 0.01. This definition is useful because it implies round(x, -3) will round to the nearest thousand, which indeed it does:
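For example:

```r
round(123.456, 2)  # two digits after the decimal point
#> [1] 123.46
round(123.456, 1)  # one digit
#> [1] 123.5
round(123.456, -1) # nearest ten
#> [1] 120
round(123456, -3)  # nearest thousand
#> [1] 123000
```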
There’s one weirdness with round() that seems surprising at first glance:
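Both of these round to 2:

```r
round(c(1.5, 2.5))
#> [1] 2 2
```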
round() uses what’s known as “round half to even” or Banker’s rounding: if a number is half way between two integers, it will be rounded to the even integer. This is a good strategy because it keeps the rounding unbiased: half of all 0.5s are rounded up, and half are rounded down.
round() is paired with floor() which always rounds down and ceiling() which always rounds up:
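For example:

```r
x <- 123.456

floor(x)
#> [1] 123
ceiling(x)
#> [1] 124
```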
These functions don’t have a digits argument, so you can instead scale down, round, and then scale back up:
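For example, to “round down” or “round up” to two decimal places:

```r
x <- 123.456

# scale up, round down, then scale back down
floor(x / 0.01) * 0.01
#> [1] 123.45
# scale up, round up, then scale back down
ceiling(x / 0.01) * 0.01
#> [1] 123.46
```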
You can use the same technique if you want to round() to a multiple of some other number:
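For example, to round to the nearest multiple of 4, or the nearest 0.25:

```r
x <- 123.456

round(x / 4) * 4
#> [1] 124
round(x / 0.25) * 0.25
#> [1] 123.5
```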
15.2.6 Cumulative and rolling aggregates
Base R provides cumsum(), cumprod(), cummin(), cummax() for running, or cumulative, sums, products, mins and maxes. dplyr provides cummean() for cumulative means. Cumulative sums tend to come up the most in practice:
x <- 1:10
cumsum(x)
#> [1] 1 3 6 10 15 21 28 36 45 55
If you need more complex rolling or sliding aggregates, try the slider package by Davis Vaughan. The following example illustrates some of its features.
library(slider)
# Same as a cumulative sum
slide_vec(x, sum, .before = Inf)
#> [1] 1 3 6 10 15 21 28 36 45 55
# Sum the current element and the one before it
slide_vec(x, sum, .before = 1)
#> [1] 1 3 5 7 9 11 13 15 17 19
# Sum the current element and the two before and after it
slide_vec(x, sum, .before = 2, .after = 2)
#> [1] 6 10 15 20 25 30 35 40 34 27
# Only compute if the window is complete
slide_vec(x, sum, .before = 2, .after = 2, .complete = TRUE)
#> [1] NA NA 15 20 25 30 35 40 NA NA
15.2.7 Exercises
1. Explain in words what each line of the code used to generate Figure 15.1 does.

2. What trigonometric functions does R provide? Guess some names and look up the documentation. Do they use degrees or radians?

3. Currently dep_time and sched_dep_time are convenient to look at, but hard to compute with because they’re not really continuous numbers. You can see the basic problem in this plot: there’s a gap between each hour.

flights |>
filter(month == 1, day == 1) |>
ggplot(aes(sched_dep_time, dep_delay)) +
geom_point()
#> Warning: Removed 4 rows containing missing values (geom_point).

Convert them to a more truthful representation of time (either fractional hours or minutes since midnight).
15.3 General transformations
The following sections describe some general transformations which are often used with numeric vectors, but can be applied to all other column types.
15.3.1 Fill in missing values
You can fill in missing values with dplyr’s coalesce():
coalesce() is vectorised, so you can find the non-missing values from a pair of vectors:
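For example, with toy vectors of my choosing (the second call shows the vectorised form):

```r
library(dplyr)

x <- c(1, 4, 5, 7, NA)
coalesce(x, 0) # replace any NA with 0
#> [1] 1 4 5 7 0

y <- c(1, 2, NA, NA, 5)
z <- c(3, 4, 5, NA, 6)
coalesce(y, z) # take y where present, otherwise z
#> [1]  1  2  5 NA  6
```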
15.3.2 Ranks
dplyr provides a number of ranking functions inspired by SQL, but you should always start with dplyr::min_rank(). It uses the typical method for dealing with ties, e.g. 1st, 2nd, 2nd, 4th.
Note that the smallest values get the lowest ranks; use desc(x) to give the largest values the smallest ranks:
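For example, with a toy vector containing a tie and a missing value (the same x is reused in the tibble example that follows):

```r
library(dplyr)

x <- c(1, 2, 2, 3, 4, NA)
min_rank(x)
#> [1]  1  2  2  4  5 NA
min_rank(desc(x))
#> [1]  5  3  3  2  1 NA
```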
If min_rank() doesn’t do what you need, look at the variants dplyr::row_number(), dplyr::dense_rank(), dplyr::percent_rank(), and dplyr::cume_dist(). See the documentation for details.
df <- tibble(x = x)
df |>
mutate(
row_number = row_number(x),
dense_rank = dense_rank(x),
percent_rank = percent_rank(x),
cume_dist = cume_dist(x)
)
#> # A tibble: 6 × 5
#> x row_number dense_rank percent_rank cume_dist
#> <dbl> <int> <int> <dbl> <dbl>
#> 1 1 1 1 0 0.2
#> 2 2 2 2 0.25 0.6
#> 3 2 3 2 0.25 0.6
#> 4 3 4 3 0.75 0.8
#> 5 4 5 4 1 1
#> 6 NA NA NA NA NA
You can achieve many of the same results by picking the appropriate ties.method argument to base R’s rank(); you’ll probably also want to set na.last = "keep" to keep NAs as NA.
row_number() can also be used without any arguments when inside a dplyr verb. In this case, it’ll give the number of the “current” row. When combined with %% or %/% this can be a useful tool for dividing data into similarly sized groups:
df <- tibble(x = runif(10))
df |>
mutate(
row0 = row_number() - 1,
three_groups = row0 %/% (n() / 3),
three_in_each_group = row0 %/% 3,
)
#> # A tibble: 10 × 4
#> x row0 three_groups three_in_each_group
#> <dbl> <dbl> <dbl> <dbl>
#> 1 0.0808 0 0 0
#> 2 0.834 1 0 0
#> 3 0.601 2 0 0
#> 4 0.157 3 0 1
#> 5 0.00740 4 1 1
#> 6 0.466 5 1 1
#> # … with 4 more rows
15.3.3 Offsets
dplyr::lead() and dplyr::lag() allow you to refer to the values just before or just after the “current” value. They return a vector of the same length as the input, padded with NAs at the start or end:
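For example, with a toy vector consistent with the differences shown in the bullets below:

```r
library(dplyr)

x <- c(2, 5, 11, 11, 19, 35)
lag(x)
#> [1] NA  2  5 11 11 19
lead(x)
#> [1]  5 11 11 19 35 NA
```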

x - lag(x) gives you the difference between the current and previous value.

x - lag(x)
#> [1] NA  3  6  0  8 16

x == lag(x) tells you when the current value changes. This is often useful combined with the grouping trick described in Section 14.6.

x == lag(x)
#> [1]    NA FALSE FALSE  TRUE FALSE FALSE

You can lead or lag by more than one position by using the second argument, n.
15.3.4 Exercises
1. Find the 10 most delayed flights using a ranking function. How do you want to handle ties? Carefully read the documentation for min_rank().

2. Which plane (tailnum) has the worst on-time record?

3. What time of day should you fly if you want to avoid delays as much as possible?

4. What does flights |> group_by(dest) |> filter(row_number() < 4) do? What does flights |> group_by(dest) |> filter(row_number(dep_delay) < 4) do?

5. For each destination, compute the total minutes of delay. For each flight, compute the proportion of the total delay for its destination.

6. Delays are typically temporally correlated: even once the problem that caused the initial delay has been resolved, later flights are delayed to allow earlier flights to leave. Using lag(), explore how the average flight delay for an hour is related to the average delay for the previous hour.

7. Look at each destination. Can you find flights that are suspiciously fast (i.e. flights that represent a potential data entry error)? Compute the air time of a flight relative to the shortest flight to that destination. Which flights were most delayed in the air?

8. Find all destinations that are flown by at least two carriers. Use those destinations to come up with a relative ranking of the carriers based on their performance for the same destination.
15.4 Summaries
Just using the counts, means, and sums that we’ve introduced already can get you a long way, but R provides many other useful summary functions. Here is a selection that you might find useful.
15.4.1 Center
So far, we’ve mostly used mean() to summarize the center of a vector of values. Because the mean is the sum divided by the count, it is sensitive to even just a few unusually high or low values. An alternative is to use the median(), which finds a value that lies in the “middle” of the vector, i.e. 50% of the values are above it and 50% are below it. Depending on the shape of the distribution of the variable you’re interested in, the mean or the median might be a better measure of center. For example, for symmetric distributions we generally report the mean, while for skewed distributions we usually report the median.
Figure 15.2 compares the daily mean vs. median departure delay. The median delay is always smaller than the mean delay because flights sometimes leave multiple hours late, but never leave multiple hours early.
flights |>
group_by(year, month, day) |>
summarise(
mean = mean(dep_delay, na.rm = TRUE),
median = median(dep_delay, na.rm = TRUE),
n = n(),
.groups = "drop"
) |>
ggplot(aes(mean, median)) +
geom_abline(slope = 1, intercept = 0, color = "white", size = 2) +
geom_point()
You might also wonder about the mode, or the most common value. This is a summary that only works well for very simple cases (which is why you might have learned about it in high school), but it doesn’t work well for many real datasets. If the data is discrete, there may be multiple most common values, and if the data is continuous, there might be no most common value because every value is ever so slightly different. For these reasons, the mode tends not to be used by statisticians and there’s no mode function included in base R^{1}.
15.4.2 Minimum, maximum, and quantiles
What if you’re interested in locations other than the center? min() and max() will give you the smallest and largest values. Another powerful tool is quantile(), which is a generalization of the median: quantile(x, 0.25) will find the value of x that is greater than 25% of the values, quantile(x, 0.5) is equivalent to the median, and quantile(x, 0.95) will find a value that’s greater than 95% of the values.
For the flights data, you might want to look at the 95% quantile of delays rather than the maximum, because it will ignore the 5% of most delayed flights which can be quite extreme.
flights |>
group_by(year, month, day) |>
summarise(
max = max(dep_delay, na.rm = TRUE),
q95 = quantile(dep_delay, 0.95, na.rm = TRUE),
.groups = "drop"
)
#> # A tibble: 365 × 5
#> year month day max q95
#> <int> <int> <int> <dbl> <dbl>
#> 1 2013 1 1 853 70.1
#> 2 2013 1 2 379 85
#> 3 2013 1 3 291 68
#> 4 2013 1 4 288 60
#> 5 2013 1 5 327 41
#> 6 2013 1 6 202 51
#> # … with 359 more rows
15.4.3 Spread
Sometimes you’re not so interested in where the bulk of the data lies, but in how spread out it is. Two commonly used summaries are the standard deviation, sd(x), and the interquartile range, IQR(). I won’t explain sd() here since you’re probably already familiar with it, but IQR() might be new — it’s quantile(x, 0.75) - quantile(x, 0.25) and gives you the range that contains the middle 50% of the data.
We can use this to reveal a small oddity in the flights data. You might expect the spread of the distance between origin and destination to be zero, since airports are always in the same place. But the code below makes it look like one airport, EGE, might have moved.
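A sketch of that check (the variable names here are my choice):

```r
library(dplyr)
library(nycflights13)

# If airports never move, the IQR of distance within each route should be 0
flights |>
  group_by(origin, dest) |>
  summarise(
    distance_iqr = IQR(distance),
    n = n(),
    .groups = "drop"
  ) |>
  filter(distance_iqr > 0)
```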
15.4.4 Distributions
It’s worth remembering that all of the summary statistics described above are a way of reducing the distribution down to a single number. This means that they’re fundamentally reductive, and if you pick the wrong summary, you can easily miss important differences between groups. That’s why it’s always a good idea to visualize the distribution before committing to your summary statistics.
Figure 15.3 shows the overall distribution of departure delays. The distribution is so skewed that we have to zoom in to see the bulk of the data. This suggests that the mean is unlikely to be a good summary and we might prefer the median instead.
flights |>
ggplot(aes(dep_delay)) +
geom_histogram(binwidth = 15)
#> Warning: Removed 8255 rows containing non-finite values (stat_bin).
flights |>
filter(dep_delay <= 120) |>
ggplot(aes(dep_delay)) +
geom_histogram(binwidth = 5)
It’s also a good idea to check that distributions for subgroups resemble the whole. Figure 15.4 overlays a frequency polygon for each day. The distributions seem to follow a common pattern, suggesting it’s fine to use the same summary for each day.
flights |>
filter(dep_delay < 120) |>
ggplot(aes(dep_delay, group = interaction(day, month))) +
geom_freqpoly(binwidth = 5, alpha = 1/5)
Don’t be afraid to explore your own custom summaries specifically tailored for the data that you’re working with. In this case, that might mean separately summarizing the flights that left early vs. the flights that left late, or, given that the values are so heavily skewed, you might try a log-transformation. Finally, don’t forget what you learned in Section 4.5: whenever creating numerical summaries, it’s a good idea to include the number of observations in each group.
15.4.5 Positions
There’s one final type of summary that’s useful for numeric vectors, but also works with every other type of value: extracting a value at a specific position. You can do this with the base R [ function, but we won’t cover it until Section 30.4.5, because it’s a very powerful and general function. For now we’ll introduce three specialized functions that you can use to extract values at a specified position: first(x), last(x), and nth(x, n).
For example, we can find the first and last departure for each day:
flights |>
group_by(year, month, day) |>
summarise(
first_dep = first(dep_time),
fifth_dep = nth(dep_time, 5),
last_dep = last(dep_time)
)
#> `summarise()` has grouped output by 'year', 'month'. You can override using the
#> `.groups` argument.
#> # A tibble: 365 × 6
#> # Groups: year, month [12]
#> year month day first_dep fifth_dep last_dep
#> <int> <int> <int> <int> <int> <int>
#> 1 2013 1 1 517 554 NA
#> 2 2013 1 2 42 535 NA
#> 3 2013 1 3 32 520 NA
#> 4 2013 1 4 25 531 NA
#> 5 2013 1 5 14 534 NA
#> 6 2013 1 6 16 555 NA
#> # … with 359 more rows
(These functions currently lack an na.rm argument, but this will hopefully be fixed by the time you read this book: https://github.com/tidyverse/dplyr/issues/6242).
If you’re familiar with [, you might wonder if you ever need these functions. I think there are two main reasons: the default argument and the order_by argument. default allows you to set a default value that’s used if the requested position doesn’t exist, e.g. you’re trying to get the 3rd element from a two element group. order_by lets you locally override the existing ordering of the rows, so you can extract values at positions determined by another variable’s order.
Extracting values at positions is complementary to filtering on ranks. Filtering gives you all variables, with each observation in a separate row:
flights |>
group_by(year, month, day) |>
mutate(r = min_rank(desc(sched_dep_time))) |>
filter(r %in% c(1, max(r)))
#> # A tibble: 1,195 × 20
#> # Groups: year, month, day [365]
#> year month day dep_time sched_dep_time dep_delay arr_time sched_arr_time
#> <int> <int> <int> <int> <int> <dbl> <int> <int>
#> 1 2013 1 1 517 515 2 830 819
#> 2 2013 1 1 2353 2359 6 425 445
#> 3 2013 1 1 2353 2359 6 418 442
#> 4 2013 1 1 2356 2359 3 425 437
#> 5 2013 1 2 42 2359 43 518 442
#> 6 2013 1 2 458 500 2 703 650
#> # … with 1,189 more rows, and 12 more variables: arr_delay <dbl>,
#> # carrier <chr>, flight <int>, tailnum <chr>, origin <chr>, dest <chr>,
#> # air_time <dbl>, distance <dbl>, hour <dbl>, minute <dbl>, time_hour <dttm>,
#> # r <int>
15.4.6 With mutate()
As the names suggest, the summary functions are typically paired with summarise(). However, because of the recycling rules we discussed in Section 30.4.3, they can also be usefully paired with mutate(), particularly when you want to do some sort of group standardization. For example:

x / sum(x) calculates the proportion of a total.

(x - mean(x)) / sd(x) computes a Z-score (standardized to mean 0 and sd 1).

x / first(x) computes an index based on the first observation.
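A sketch of the first pattern in action (the choice of grouping variable and delay column is mine, for illustration):

```r
library(dplyr)
library(nycflights13)

# Because sum(x) is length 1, it's recycled across the rows of each group,
# so every flight gets its share of its destination's total delay
flights |>
  group_by(dest) |>
  mutate(prop_delay = arr_delay / sum(arr_delay, na.rm = TRUE))
```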
15.4.7 Exercises

1. Brainstorm at least 5 different ways to assess the typical delay characteristics of a group of flights. Consider the following scenarios:

A flight is 15 minutes early 50% of the time, and 15 minutes late 50% of the time.
A flight is always 10 minutes late.
A flight is 30 minutes early 50% of the time, and 30 minutes late 50% of the time.
99% of the time a flight is on time. 1% of the time it’s 2 hours late.

Which do you think is more important: arrival delay or departure delay?

2. Which destinations show the greatest variation in air speed?

3. Create a plot to further explore the adventures of EGE. Can you find any evidence that the airport moved locations?