# 26 Functions

You are reading the work-in-progress second edition of R for Data Science. This chapter is currently a dumping ground for ideas, and we don’t recommend reading it. You can find the complete first edition at https://r4ds.had.co.nz.

## 26.1 Introduction

One of the best ways to improve your reach as a data scientist is to write functions. Functions allow you to automate common tasks in a more powerful and general way than copy-and-pasting. You should consider writing a function whenever you’ve copied and pasted a block of code more than twice (i.e. you now have three copies of the same code).

Writing a function has three big advantages over using copy-and-paste:

1. You can give a function an evocative name that makes your code easier to understand.

2. As requirements change, you only need to update code in one place, instead of many.

3. You eliminate the chance of making incidental mistakes when you copy and paste (i.e. updating a variable name in one place, but not in another).

Writing good functions is a lifetime journey. Even after using R for many years we still learn new techniques and better ways of approaching old problems. The goal of this chapter is to get you started on your journey with functions with three useful types of functions:

• Vector functions take one or more vectors as input and return a vector as output.
• Data frame functions take a data frame as input and return a data frame as output.
• Plot functions take a data frame as input and return a plot as output.

The chapter concludes with some advice on function style.

### 26.1.1 Prerequisites

We’ll wrap up a variety of functions from around the tidyverse.

```r
library(tidyverse)
```

## 26.2 Vector functions

We’ll begin with vector functions: functions that take one or more vectors and return a vector result.

For example, take a look at this code. What does it do?

```r
df <- tibble(
  a = rnorm(5),
  b = rnorm(5),
  c = rnorm(5),
  d = rnorm(5),
)
```

```r
df |> mutate(
  a = (a - min(a, na.rm = TRUE)) /
    (max(a, na.rm = TRUE) - min(a, na.rm = TRUE)),
  b = (b - min(b, na.rm = TRUE)) /
    (max(b, na.rm = TRUE) - min(a, na.rm = TRUE)),
  c = (c - min(c, na.rm = TRUE)) /
    (max(c, na.rm = TRUE) - min(c, na.rm = TRUE)),
  d = (d - min(d, na.rm = TRUE)) /
    (max(d, na.rm = TRUE) - min(d, na.rm = TRUE)),
)
#> # A tibble: 5 × 4
#>       a     b     c     d
#>   <dbl> <dbl> <dbl> <dbl>
#> 1 0.339  2.59 0.291 0
#> 2 0.880  0    0.611 0.557
#> 3 0      1.37 1     0.752
#> 4 0.795  1.37 0     1
#> 5 1      1.34 0.580 0.394
```

You might be able to puzzle out that this rescales each column to have a range from 0 to 1. But did you spot the mistake? When Hadley wrote this code he made an error when copying-and-pasting and forgot to change an a to a b. Preventing this type of mistake is one very good reason to learn how to write functions.

### 26.2.1 Writing a function

To write a function you need to first analyse your repeated code to figure out which parts are constant and which parts vary. If we take the code above and pull it outside of mutate(), it’s a little easier to see the pattern because each repetition is now one line:

```r
(a - min(a, na.rm = TRUE)) / (max(a, na.rm = TRUE) - min(a, na.rm = TRUE))
(b - min(b, na.rm = TRUE)) / (max(b, na.rm = TRUE) - min(b, na.rm = TRUE))
(c - min(c, na.rm = TRUE)) / (max(c, na.rm = TRUE) - min(c, na.rm = TRUE))
(d - min(d, na.rm = TRUE)) / (max(d, na.rm = TRUE) - min(d, na.rm = TRUE))
```

To make this a bit clearer I can replace the bit that varies with █:

```r
(█ - min(█, na.rm = TRUE)) / (max(█, na.rm = TRUE) - min(█, na.rm = TRUE))
```

There’s only one thing that varies which implies I’m going to need a function with one argument.

To turn this into an actual function you need three things:

1. A name. Here we might use rescale01 because this function rescales a vector to lie between 0 and 1.

2. The arguments. The arguments are things that vary across calls. Here we have just one argument which we’re going to call x because this is a conventional name for a numeric vector.

3. The body. The body is the code that is repeated across all the calls.

Then you create a function by following the template:

```r
name <- function(arguments) {
  body
}
```

For this case that leads to:

```r
rescale01 <- function(x) {
  (x - min(x, na.rm = TRUE)) / (max(x, na.rm = TRUE) - min(x, na.rm = TRUE))
}
```

At this point you might test with a few simple inputs to make sure you’ve captured the logic correctly:

```r
rescale01(c(-10, 0, 10))
#> [1] 0.0 0.5 1.0
rescale01(c(1, 2, 3, NA, 5))
#> [1] 0.00 0.25 0.50   NA 1.00
```

Then you can rewrite the call to mutate() as:

```r
df |> mutate(
  a = rescale01(a),
  b = rescale01(b),
  c = rescale01(c),
  d = rescale01(d),
)
#> # A tibble: 5 × 4
#>       a     b     c     d
#>   <dbl> <dbl> <dbl> <dbl>
#> 1 0.339 1     0.291 0
#> 2 0.880 0     0.611 0.557
#> 3 0     0.530 1     0.752
#> 4 0.795 0.531 0     1
#> 5 1     0.518 0.580 0.394
```

(In Chapter 28, you’ll learn how to use across() to reduce the duplication even further so all you need is df |> mutate(across(a:d, rescale01)).)

### 26.2.2 Improving our function

You might notice that the rescale01() function does some unnecessary work: instead of computing min() twice and max() once we could compute both the minimum and maximum in one step with range():

```r
rescale01 <- function(x) {
  rng <- range(x, na.rm = TRUE)
  (x - rng[1]) / (rng[2] - rng[1])
}
```

Or you might try this function on a vector that includes an infinite value:

```r
x <- c(1:10, Inf)
rescale01(x)
#>  [1]   0   0   0   0   0   0   0   0   0   0 NaN
```

That result is not particularly useful so we could ask range() to ignore infinite values:

```r
rescale01 <- function(x) {
  rng <- range(x, na.rm = TRUE, finite = TRUE)
  (x - rng[1]) / (rng[2] - rng[1])
}

rescale01(x)
#>  [1] 0.0000000 0.1111111 0.2222222 0.3333333 0.4444444 0.5555556 0.6666667
#>  [8] 0.7777778 0.8888889 1.0000000       Inf
```

These changes illustrate an important benefit of functions: because we’ve moved the repeated code into a function, we only need to make the change in one place.

### 26.2.3 Mutate functions

Let’s look at a few more vector functions before you get some practice writing your own. We’ll start by looking at a few useful functions that work well in functions like mutate() and filter() because they return an output the same length as the input.

For example, maybe instead of rescaling to min 0, max 1, you want to rescale to mean zero, standard deviation one:

```r
rescale_z <- function(x) {
  (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE)
}
```

Sometimes your functions are highly specialised for one data analysis. For example, you might have a bunch of variables that record missing values as 997, 998, or 999:

```r
fix_na <- function(x) {
  if_else(x %in% c(997, 998, 999), NA, x)
}
```

In other cases, you might be wrapping up a simple case_when() to give it a standard name. For example, the clamp() function ensures all values of a vector lie between a minimum and a maximum:

```r
clamp <- function(x, min, max) {
  case_when(
    x < min ~ min,
    x > max ~ max,
    .default = x
  )
}

clamp(1:10, min = 3, max = 7)
#>  [1] 3 3 3 4 5 6 7 7 7 7
```

Or maybe you’d rather mark those values as NAs:

```r
discard_outside <- function(x, min, max) {
  case_when(
    x < min ~ NA,
    x > max ~ NA,
    .default = x
  )
}

discard_outside(1:10, min = 3, max = 7)
#>  [1] NA NA  3  4  5  6  7 NA NA NA
```

Of course functions don’t just need to work with numeric variables. You might want to extract out some repeated string manipulation. Maybe you need to make the first character of each string upper case:

```r
first_upper <- function(x) {
  str_sub(x, 1, 1) <- str_to_upper(str_sub(x, 1, 1))
  x
}

first_upper("hello")
#> [1] "Hello"
```

Or maybe, like NV Labor Analysis, you want to strip percent signs, commas, and dollar signs from a string before converting it into a number:

```r
clean_number <- function(x) {
  is_pct <- str_detect(x, "%")
  num <- x |>
    str_remove_all("%") |>
    str_remove_all(",") |>
    str_remove_all(fixed("$")) |>
    as.numeric()
  if_else(is_pct, num / 100, num)
}

clean_number("$12,300")
#> [1] 12300
clean_number("45%")
#> [1] 0.45
```

### 26.2.4 Summary functions

In other cases you want a function that returns a single value for use in summarise(). Sometimes this can just be a matter of setting a default argument:

```r
commas <- function(x) {
  str_flatten(x, collapse = ", ")
}

commas(c("cat", "dog", "pigeon"))
#> [1] "cat, dog, pigeon"
```

Or some very simple computation, for example to compute the coefficient of variation, which standardizes the standard deviation by dividing it by the mean:

```r
cv <- function(x, na.rm = FALSE) {
  sd(x, na.rm = na.rm) / mean(x, na.rm = na.rm)
}

cv(runif(100, min = 0, max = 50))
#> [1] 0.5196276
cv(runif(100, min = 0, max = 500))
#> [1] 0.5652554
```

Or maybe you just want to make a common pattern easier to remember by giving it a memorable name:

```r
# https://twitter.com/gbganalyst/status/1571619641390252033
n_missing <- function(x) {
  sum(is.na(x))
}
```

You can also write functions with multiple vector inputs. For example, maybe you want to compute the mean absolute prediction error to help you compare model predictions with actual values:

```r
# https://twitter.com/neilgcurrie/status/1571607727255834625
mape <- function(actual, predicted) {
  sum(abs((actual - predicted) / actual)) / length(actual)
}
```

### 26.2.5 Exercises

1. Practice turning the following code snippets into functions. Think about what each function does. What would you call it? How many arguments does it need?

```r
mean(is.na(x))
mean(is.na(y))
mean(is.na(z))

x / sum(x, na.rm = TRUE)
y / sum(y, na.rm = TRUE)
z / sum(z, na.rm = TRUE)

round(x / sum(x, na.rm = TRUE) * 100, 1)
round(y / sum(y, na.rm = TRUE) * 100, 1)
round(z / sum(z, na.rm = TRUE) * 100, 1)
```
2. In the second variant of rescale01(), infinite values are left unchanged. Can you rewrite rescale01() so that -Inf is mapped to 0, and Inf is mapped to 1?

3. Given a vector of birthdates, write a function to compute the age in years.

4. Write your own functions to compute the variance and skewness of a numeric vector. Variance is defined as $\mathrm{Var}(x) = \frac{1}{n - 1} \sum_{i=1}^n (x_i - \bar{x})^2$, where $\bar{x} = (\sum_{i=1}^n x_i) / n$ is the sample mean. Skewness is defined as $\mathrm{Skew}(x) = \frac{\frac{1}{n-2}\left(\sum_{i=1}^n(x_i - \bar{x})^3\right)}{\mathrm{Var}(x)^{3/2}}$.

5. Write both_na(), a function that takes two vectors of the same length and returns the number of positions that have an NA in both vectors.

6. Read the documentation to figure out what the following functions do. Why are they useful even though they are so short?

```r
is_directory <- function(x) file.info(x)$isdir
is_readable <- function(x) file.access(x, 4) == 0
```

## 26.3 Data frame functions

Vector functions are useful for pulling out code that’s repeated within dplyr verbs. In this section, you’ll learn how to write “data frame” functions which pull out code that’s repeated across multiple pipelines. These functions work in the same way as dplyr verbs: they take a data frame as the first argument, some extra arguments that say what to do with it, and usually return a data frame.

### 26.3.1 Indirection and tidy evaluation

When you start writing functions that use dplyr verbs you rapidly hit the problem of indirection. Let’s illustrate the problem with a very simple function: pull_unique(). The goal of this function is to pull() the unique (distinct) values of a variable:

```r
pull_unique <- function(df, var) {
  df |>
    distinct(var) |>
    pull(var)
}
```

If we try and use it, we get an error:

```r
diamonds |> pull_unique(clarity)
#> Error in distinct():
#> ! Must use existing variables.
#> ✖ var not found in .data.
```

To make the problem a bit more clear we can use a made up data frame:

```r
df <- tibble(var = "var", x = "x", y = "y")

df |> pull_unique(x)
#> [1] "var"
df |> pull_unique(y)
#> [1] "var"
```

Regardless of how we call pull_unique() it always does df |> distinct(var) |> pull(var), instead of df |> distinct(x) |> pull(x) or df |> distinct(y) |> pull(y). This is a problem of indirection, and it arises because dplyr allows you to refer to the names of variables inside your data frame without any special treatment, so called tidy evaluation.

Tidy evaluation is great 95% of the time because it makes our data analyses very concise: you never have to say which data frame a variable comes from; it’s obvious from the context. The downside of tidy evaluation comes when we want to wrap up repeated tidyverse code into a function. Here we need some way to tell distinct() and pull() not to treat var as the name of a variable, but instead to look inside var for the variable we actually want to use.

Tidy evaluation includes a solution to this problem called embracing. By wrapping a variable in {{ }} (embracing it) we tell dplyr that we want to use the value stored inside the variable, not the variable itself. One way to remember what’s happening is to think of {{ }} as looking down a tunnel: it’s going to make the function look inside of var rather than looking for a variable called var.

So to make pull_unique() work we just need to replace var with {{ var }}:

```r
pull_unique <- function(df, var) {
  df |>
    distinct({{ var }}) |>
    pull({{ var }})
}

diamonds |> pull_unique(clarity)
#> [1] SI2  SI1  VS1  VS2  VVS2 VVS1 I1   IF
#> Levels: I1 < SI2 < SI1 < VS2 < VS1 < VVS2 < VVS1 < IF
```

### 26.3.2 When to embrace?

The art of wrapping tidyverse functions is basically figuring out which arguments need to be embraced. Fortunately this is easy because you can look it up in the documentation 😄. There are two terms to look for in the docs:

• Data-masking: this is used in functions like arrange(), filter(), and summarise() which do computation with variables.

• Tidy-selection: this is used for functions like select(), relocate(), and rename() that select groups of variables.

When you start looking closely at the documentation, you’ll notice that many dplyr functions use .... This is a special shorthand syntax that matches any number of arguments that aren’t otherwise explicitly matched. For example, arrange() uses data-masking for ... and select() uses tidy-select for ....
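
To see the difference concretely, here is a small sketch (the toy data frame is made up for illustration; it assumes the tidyverse is loaded):

```r
df <- tibble(x = c(2, 1, 3), y_min = 4:6, y_max = 7:9)

# arrange()'s ... is data-masking: you can compute with variables
df |> arrange(desc(x))

# select()'s ... is tidy-select: you can use selection helpers
df |> select(starts_with("y"))
```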

Your intuition for many common functions should be pretty good: think about whether it’s ok to compute x + 1 or select multiple variables with a:x. There are some cases that are harder to guess because you usually use them with a single variable, which uses the same syntax for both data-masking and tidy-select:

• The arguments to group_by(), count(), and distinct() are computing arguments because they can all create new variables.

• The names_from argument to pivot_wider() is a selecting argument because you can take the names from multiple variables with names_from = c(x, y, z).
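
For example, here is a small sketch of taking names from two variables (the toy data frame is made up for illustration; it assumes the tidyverse is loaded):

```r
sales <- tribble(
  ~region, ~quarter, ~revenue,
  "east",  "q1",     100,
  "east",  "q2",     150,
  "west",  "q1",     80,
  "west",  "q2",     90
)

# names_from is tidy-select, so it can take multiple variables,
# producing the columns east_q1, east_q2, west_q1, and west_q2
sales |> pivot_wider(names_from = c(region, quarter), values_from = revenue)
```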

In the next two sections we’ll explore the sorts of handy functions you might write for data-masking and tidy-select arguments.

### 26.3.3 Data-masking arguments

If you commonly perform the same set of summaries when doing initial data exploration, you might consider wrapping them up in a helper function:

```r
summary6 <- function(data, var) {
  data |> summarise(
    min = min({{ var }}, na.rm = TRUE),
    mean = mean({{ var }}, na.rm = TRUE),
    median = median({{ var }}, na.rm = TRUE),
    max = max({{ var }}, na.rm = TRUE),
    n = n(),
    n_miss = sum(is.na({{ var }})),
    .groups = "drop"
  )
}

diamonds |> summary6(carat)
#> # A tibble: 1 × 6
#>     min  mean median   max     n n_miss
#>   <dbl> <dbl>  <dbl> <dbl> <int>  <int>
#> 1   0.2 0.798    0.7  5.01 53940      0
```

(Whenever you wrap summarise() in a helper, I think it’s good practice to set .groups = "drop" to both avoid the message and leave the data in an ungrouped state.)

The nice thing about this function is that because it wraps summarise() you can use it on grouped data:

```r
diamonds |>
  group_by(cut) |>
  summary6(carat)
#> # A tibble: 5 × 7
#>   cut         min  mean median   max     n n_miss
#>   <ord>     <dbl> <dbl>  <dbl> <dbl> <int>  <int>
#> 1 Fair       0.22 1.05    1     5.01  1610      0
#> 2 Good       0.23 0.849   0.82  3.01  4906      0
#> 3 Very Good  0.2  0.806   0.71  4    12082      0
#> 4 Premium    0.2  0.892   0.86  4.01 13791      0
#> 5 Ideal      0.2  0.703   0.54  3.5  21551      0
```

Because the arguments to summarise() are data-masking, the var argument to summary6() is also data-masking. That means you can also summarize computed variables:

```r
diamonds |>
  group_by(cut) |>
  summary6(log10(carat))
#> # A tibble: 5 × 7
#>   cut          min    mean  median   max     n n_miss
#>   <ord>      <dbl>   <dbl>   <dbl> <dbl> <int>  <int>
#> 1 Fair      -0.658 -0.0273  0      0.700  1610      0
#> 2 Good      -0.638 -0.133  -0.0862 0.479  4906      0
#> 3 Very Good -0.699 -0.164  -0.149  0.602 12082      0
#> 4 Premium   -0.699 -0.125  -0.0655 0.603 13791      0
#> 5 Ideal     -0.699 -0.225  -0.268  0.544 21551      0
```

To summarize multiple variables, you’ll need to wait until Section 28.2, where you’ll learn how to use across() to repeat the same computation with multiple variables.

Another common helper function is a version of count() that also computes proportions:

```r
# https://twitter.com/Diabb6/status/1571635146658402309
count_prop <- function(df, var, sort = FALSE) {
  df |>
    count({{ var }}, sort = sort) |>
    mutate(prop = n / sum(n))
}

diamonds |> count_prop(clarity)
#> # A tibble: 8 × 3
#>   clarity     n   prop
#>   <ord>   <int>  <dbl>
#> 1 I1        741 0.0137
#> 2 SI2      9194 0.170
#> 3 SI1     13065 0.242
#> 4 VS2     12258 0.227
#> 5 VS1      8171 0.151
#> 6 VVS2     5066 0.0939
#> # … with 2 more rows
```

Note that this function has three arguments: df, var, and sort, and only var needs to be embraced. var is passed to count(), which uses data-masking for all variables in ....

Or maybe you want to find the unique values of a variable for a subset of the data:

```r
unique_where <- function(df, condition, var) {
  df |>
    filter({{ condition }}) |>
    distinct({{ var }}) |>
    arrange({{ var }}) |>
    pull()
}

nycflights13::flights |>
  unique_where(month == 12, dest)
#>  [1] "ABQ" "ALB" "ATL" "AUS" "AVL" "BDL" "BGR" "BHM" "BNA" "BOS" "BQN" "BTV"
#> [13] "BUF" "BUR" "BWI" "BZN" "CAE" "CAK" "CHS" "CLE" "CLT" "CMH" "CVG" "DAY"
#> [25] "DCA" "DEN" "DFW" "DSM" "DTW" "EGE" "EYW" "FLL" "GRR" "GSO" "GSP" "HDN"
#> [37] "HNL" "HOU" "IAD" "IAH" "ILM" "IND" "JAC" "JAX" "LAS" "LAX" "LGB" "MCI"
#> [49] "MCO" "MDW" "MEM" "MHT" "MIA" "MKE" "MSN" "MSP" "MSY" "MTJ" "OAK" "OKC"
#> [61] "OMA" "ORD" "ORF" "PBI" "PDX" "PHL" "PHX" "PIT" "PSE" "PSP" "PVD" "PWM"
#> [73] "RDU" "RIC" "ROC" "RSW" "SAN" "SAT" "SAV" "SBN" "SDF" "SEA" "SFO" "SJC"
#> [85] "SJU" "SLC" "SMF" "SNA" "SRQ" "STL" "STT" "SYR" "TPA" "TUL" "TYS" "XNA"
```

Here we embrace condition because it’s passed to filter() and var because it’s passed to distinct() and arrange(). We could also pass var to pull(), but it doesn’t actually matter here because there’s only one variable to select.

### 26.3.4 Tidy-select arguments

Tidy-select arguments are useful when you want to let the caller of your function choose which variables to work with. For example, here are two variants of left_join() that select which variables to take from y before joining:

```r
# https://twitter.com/drob/status/1571879373053259776
left_join_select <- function(x, y, y_vars = everything(), by = NULL) {
  y <- y |> select({{ y_vars }})
  left_join(x, y, by = by)
}

left_join_id <- function(x, y, y_vars = everything()) {
  y <- y |> select(id, {{ y_vars }})
  left_join(x, y, by = "id")
}
```

Sometimes you want to select variables inside a function that uses data-masking. For example, imagine you want to write a count_missing() function that counts the number of missing observations in each group. You might try writing something like:

```r
count_missing <- function(df, group_vars, x_var) {
  df |>
    group_by({{ group_vars }}) |>
    summarise(n_miss = sum(is.na({{ x_var }})))
}

nycflights13::flights |>
  count_missing(c(year, month, day), dep_time)
#> Error in group_by():
#> ℹ In argument: ..1 = c(year, month, day).
#> Caused by error:
#> ! ..1 must be size 336776 or 1, not 1010328.
```

This doesn’t work because group_by() uses data-masking, not tidy-select. We can work around that problem by using pick(), which allows you to use tidy-select inside data-masking functions:

```r
count_missing <- function(df, group_vars, x_var) {
  df |>
    group_by(pick({{ group_vars }})) |>
    summarise(n_miss = sum(is.na({{ x_var }})))
}

nycflights13::flights |>
  count_missing(c(year, month, day), dep_time)
#> summarise() has grouped output by 'year', 'month'. You can override using the
#> .groups argument.
#> # A tibble: 365 × 4
#> # Groups:   year, month [12]
#>    year month   day n_miss
#>   <int> <int> <int>  <int>
#> 1  2013     1     1      4
#> 2  2013     1     2      8
#> 3  2013     1     3     10
#> 4  2013     1     4      6
#> 5  2013     1     5      3
#> 6  2013     1     6      1
#> # … with 359 more rows
```

Another useful helper is to make a “wide” count, where you make a 2d table of counts. Here we count using all the variables in the rows and columns, and then use pivot_wider() to rearrange:

```r
# Inspired by https://twitter.com/pollicipes/status/1571606508944719876
count_wide <- function(data, rows, cols) {
  data |>
    count(pick(c({{ rows }}, {{ cols }}))) |>
    pivot_wider(
      names_from = {{ cols }},
      values_from = n,
      names_sort = TRUE,
      values_fill = 0
    )
}

mtcars |> count_wide(vs, cyl)
#> # A tibble: 2 × 4
#>      vs   `4`   `6`   `8`
#>   <dbl> <int> <int> <int>
#> 1     0     1     3    14
#> 2     1    10     4     0
mtcars |> count_wide(c(vs, am), cyl)
#> # A tibble: 4 × 5
#>      vs    am   `4`   `6`   `8`
#>   <dbl> <dbl> <int> <int> <int>
#> 1     0     0     0     0    12
#> 2     0     1     1     3     2
#> 3     1     0     3     4     0
#> 4     1     1     7     0     0
```

### 26.3.5 Learning more

Once you have the basics under your belt, you can learn more about the full range of tidy evaluation possibilities by reading vignette("programming", package = "dplyr").

## 26.4 Plot functions

You can also use the techniques described above with ggplot2, because aes() is a data-masking function. For example, imagine that you’re making a lot of histograms:

```r
diamonds |>
  ggplot(aes(carat)) +
  geom_histogram(binwidth = 0.1)

diamonds |>
  ggplot(aes(carat)) +
  geom_histogram(binwidth = 0.05)
```

Wouldn’t it be nice if you could wrap this up into a histogram function? This is easy once you know that aes() is a data-masking function, which means you need to embrace:

```r
histogram <- function(df, var, binwidth = NULL) {
  df |>
    ggplot(aes({{ var }})) +
    geom_histogram(binwidth = binwidth)
}

diamonds |> histogram(carat, 0.1)
```

Note that histogram() returns a ggplot2 plot, so that you can still add on additional components if you want. Just remember to switch from |> to +:

```r
diamonds |>
  histogram(carat, 0.1) +
  labs(x = "Size (in carats)", y = "Number of diamonds")
```

### 26.4.1 Other examples

```r
# https://twitter.com/tyler_js_smith/status/1574377116988104704
lin_check <- function(df, x, y) {
  df |>
    ggplot(aes({{ x }}, {{ y }})) +
    geom_point() +
    geom_smooth(method = "loess", color = "red", se = FALSE) +
    geom_smooth(method = "lm", color = "black", se = FALSE)
}
```

Of course you might combine both dplyr and ggplot2:

```r
sorted_bars <- function(df, var) {
  df |>
    mutate({{ var }} := fct_rev(fct_infreq({{ var }}))) |>
    ggplot(aes(y = {{ var }})) +
    geom_bar()
}

diamonds |> sorted_bars(cut)
```

Next we’ll discuss two more complicated cases: facetting and automatic labelling.

### 26.4.2 Facetting

Unfortunately facetting is a special challenge, mostly because it was implemented well before we understood what tidy evaluation was and how it should work. And unlike aes(), it wasn’t straightforward to backport to tidy evaluation, so you have to use a different syntax than usual. Instead of writing ~ x, you write vars(x), and instead of ~ x + y you write vars(x, y). The only advantage of this syntax is that vars() is data-masking, so you can embrace within it.

```r
# https://twitter.com/sharoz/status/1574376332821204999

# Facetting is fiddly - have to use special vars syntax.
foo <- function(x) {
  ggplot(mtcars) +
    aes(x = mpg, y = disp) +
    geom_point() +
    facet_wrap(vars({{ x }}))
}
```

I’ve written these functions so that you can supply any data frame, but there are also advantages to hardcoding a data frame, if you’re using it repeatedly:

```r
density <- function(fill, ...) {
  palmerpenguins::penguins |>
    ggplot(aes(bill_length_mm, fill = {{ fill }})) +
    geom_density(alpha = 0.5) +
    facet_wrap(vars(...))
}

density()
#> Warning: Removed 2 rows containing non-finite values (stat_density()).
density(species)
#> Warning: Removed 2 rows containing non-finite values (stat_density()).
density(island, sex)
#> Warning: Removed 2 rows containing non-finite values (stat_density()).
#> Warning: Groups with fewer than two data points have been dropped.
#> Warning in max(ids, na.rm = TRUE): no non-missing arguments to max; returning
#> -Inf
```

Also note that I hardcoded the x variable but allowed the fill to vary.

```r
bars <- function(df, condition, var) {
  df |>
    filter({{ condition }}) |>
    ggplot(aes({{ var }})) +
    geom_bar() +
    scale_x_discrete(guide = guide_axis(angle = 45))
}

diamonds |> bars(cut == "Good", clarity)
```

### 26.4.3 Labelling

It’d be nice to label this plot automatically. To do so, we’re going to have to go under the covers of tidy evaluation and use a function from a package we haven’t talked about before: rlang. rlang is the package that implements tidy evaluation, and is used by all the other packages in the tidyverse. rlang provides a helpful function called englue() to solve just this problem. It uses a syntax inspired by glue but combined with embracing:

```r
histogram <- function(df, var, binwidth = NULL) {
  label <- rlang::englue("A histogram of {{var}} with binwidth {binwidth}")

  df |>
    ggplot(aes({{ var }})) +
    geom_histogram(binwidth = binwidth) +
    labs(title = label)
}

diamonds |> histogram(carat, 0.1)
```

(Note that if you omit the binwidth the function fails with a weird error. That appears to be a bug in englue(): https://github.com/r-lib/rlang/issues/1492. Hopefully it’ll be fixed soon!)

You can use the same approach any other place that you might supply a string in a ggplot2 plot.

It’s hard to create general purpose plotting functions because you need to consider many different situations, and we haven’t given you the programming skills to handle them all. Fortunately, in most cases it’s relatively simple to extract repeated plotting code into a function. So, for now, strive to keep your functions simple, focussing on concrete repetition rather than solving imaginary future problems.

You can also learn other techniques in https://ggplot2-book.org/programming.html.

## 26.5 Style

It’s important to remember that functions are not just for the computer, but are also for humans. R doesn’t care what your function is called, or what comments it contains, but these are important for human readers. This section discusses some things that you should bear in mind when writing functions that humans can understand.

The name of a function is important. Ideally, the name of your function will be short, but clearly evoke what the function does. That’s hard! But it’s better to be clear than short, as RStudio’s autocomplete makes it easy to type long names.

Generally, function names should be verbs, and arguments should be nouns. There are some exceptions: nouns are ok if the function computes a very well known noun (i.e. mean() is better than compute_mean()), or accessing some property of an object (i.e. coef() is better than get_coefficients()). A good sign that a noun might be a better choice is if you’re using a very broad verb like “get”, “compute”, “calculate”, or “determine”. Use your best judgement and don’t be afraid to rename a function if you figure out a better name later.

```r
# Too short
f()

# Not a verb, or descriptive
my_awesome_function()

# Long, but clear
impute_missing()
collapse_years()
```

In terms of white space, continue to follow the rules from Chapter 7. Additionally, function should always be followed by squiggly brackets ({}), and the contents should be indented by an additional two spaces. This makes it easier to see the hierarchy in your code by skimming the left-hand margin.

```r
# Missing extra two spaces
pull_unique <- function(df, var) {
df |>
  distinct({{ var }}) |>
  pull({{ var }})
}

# Pipe indented incorrectly
pull_unique <- function(df, var) {
  df |>
  distinct({{ var }}) |>
  pull({{ var }})
}

# Missing {} and all one line
pull_unique <- function(df, var) df |> distinct({{ var }}) |> pull({{ var }})
```

As you can see from the example we recommend putting extra spaces inside of {{ }}. This makes it super obvious that something unusual is happening.

### 26.5.1 Exercises

1. Read the source code for each of the following two functions, puzzle out what they do, and then brainstorm better names.

```r
f1 <- function(string, prefix) {
  substr(string, 1, nchar(prefix)) == prefix
}

f3 <- function(x, y) {
  rep(y, length.out = length(x))
}
```
2. Take a function that you’ve written recently and spend 5 minutes brainstorming a better name for it and its arguments.

3. Make a case for why norm_r(), norm_d() etc would be better than rnorm(), dnorm(). Make a case for the opposite.

## 26.6 Summary

In this chapter you learned how to write functions for three useful scenarios: creating a vector, creating a data frame, or creating a plot.

Writing functions to create data frames and plots using the tidyverse required you to learn a little about tidy evaluation. Tidy evaluation is really important, because it’s what allows you to write diamonds |> filter(x == y) and have filter() know to use x and y from the diamonds dataset. The downside of tidy evaluation is that you need to learn a new technique for programming: embracing. Embracing, e.g. {{ x }}, tells the tidy-evaluation-using function to look inside the argument x, rather than using the literal variable x. You can figure out when you need to use embracing by looking in the documentation for the terms for the two major styles of tidy evaluation: “data-masking” and “tidy-select”.

In the next chapter, we’ll dive into some of the details of R’s vector data structures that we’ve omitted so far. These are immediately useful by themselves, but are a necessary foundation for the following chapter on iteration that provides some amazingly powerful tools.