class: inverse, middle, left, my-title-slide, title-slide

.title[
# Generalized Additive Models
]
.subtitle[
## a data-driven approach to estimating regression models
]
.author[
### Gavin Simpson
]
.institute[
### Department of Animal & Veterinary Sciences · Aarhus University
]
.date[
### 1400–2000 CET (1300–1900 UTC) Monday 20th January, 2025
]

---
class: inverse middle center big-subsection

# Welcome

???

---

# Logistics

## Slides

Slidedeck: [bit.ly/physalia-gam-1](https://bit.ly/physalia-gam-1)

Sources: [bit.ly/physalia-gam](https://bit.ly/physalia-gam)

Download a ZIP of everything directly: [bit.ly/physalia-gam-zip](https://bit.ly/physalia-gam-zip)

Unpack the zip & remember where you put it

---

# Today's topics

* Brief overview of R and the Tidyverse packages we’ll encounter throughout the course
* Recap generalised linear models
* Fitting your first GAM

---
class: inverse middle center subsection

# GLMs

---

# Generalized linear models

Generalised linear models (GLMs) extend linear regression and encompass Poisson, logistic, and other regression models

GLMs extend the types of data and error distributions that can be modelled beyond the Gaussian data of linear regression

With GLMs we can model count data, binary/presence absence data, and concentration data, where the response variable is not Gaussian and may not even be continuous. Such data have different mean-variance relationships and we would not expect the errors to be Gaussian.

---

# Generalized linear models

Typical uses of GLMs in ecology are

- Poisson GLM for count data
- Logistic GLM for presence absence data
- Gamma GLM for non-negative or positive continuous data

GLMs can handle many problems that appear non-linear

There is no need to transform the data; any transformation is handled as part of the GLM process

---

# Binomial distribution

* For a fixed number of trials (*n*),
* a fixed probability of “success” (*p*), &
* two outcomes per trial (heads or tails)

Flip a coin 10 times with *p* = 0.7; the number of heads follows `\(Bin(n = 10, p = 0.7)\)` and the probability of exactly 7 heads is ~ 0.27

``` r
dbinom(x = 7, size = 10, prob = 0.7)
```

```
## [1] 0.2668279
```

---

# Binomial distribution

<!-- -->

---

# Poisson distribution

The Poisson gives the distribution of the number of “things” (individuals, events, counts) in a given sampling interval/effort if each event is **independent**

It has a single parameter, `\(\lambda\)`, the average density or arrival rate

---

# Poisson distribution

<!-- -->

---

# The structure of a GLM

A GLM consists of three components, chosen/specified by the user

.small[
1. A **Random component**, specifying the conditional distribution of the response `\(y_i\)` given the values of the explanatory data
2. A **Linear Predictor** `\(\eta\)` — the linear function of regressors
    `$$\eta_i = \alpha + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik}$$`
    The `\(x_{ij}\)` are prescribed functions of the explanatory variables and can be transformed variables, dummy variables, polynomial terms, interactions etc.
3. A smooth and invertible **Link Function** `\(g(\cdot)\)`, which transforms the expectation of the response `\(\mu_i \equiv \mathbb{E}(y_i)\)` to the linear predictor
    `$$g(\mu_i) = \eta_i = \alpha + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik}$$`
    As `\(g(\cdot)\)` is invertible, we can write
    `$$\mu_i = g^{-1}(\eta_i) = g^{-1}(\alpha + \beta_1 x_{i1} + \beta_2 x_{i2} + \cdots + \beta_k x_{ik})$$`
]
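---

# The structure of a GLM

The three components are easy to see numerically. A minimal sketch with made-up coefficient values (not from any fitted model): build a linear predictor, apply the inverse of the logit link, and check that the link maps the expectation back again

``` r
x <- c(-2, 0, 2)
eta <- 0.5 + 1.2 * x       # linear predictor eta_i (illustrative coefficients)
mu <- plogis(eta)          # inverse link g^{-1}(): expectations on (0, 1)
all.equal(qlogis(mu), eta) # the link g() returns the linear predictor
```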
---

# Conditional distribution of `\(y_i\)`

Originally GLMs were specified for response distributions belonging to the *exponential family* of probability distributions

- Continuous probability distributions
    - Gaussian (or normal distribution; used in linear regression)
    - Gamma (data with constant coefficient of variation)
    - Exponential (time to death, survival analysis)
    - Chi-square
    - Inverse-Gaussian
- Discrete probability distributions
    - Poisson (count data)
    - Binomial (0/1 data, counts from a total)
    - Multinomial

Choice depends on the range of `\(y_i\)` and on the relationship between the variance and the expectation of `\(y_i\)` — the *mean-variance relationship*

---

# Conditional distribution of `\(y_i\)`

Characteristics of common GLM probability distributions

|                  | Canonical Link | Range of `\(Y_i\)`               | Variance function                 |
|------------------|----------------|----------------------------------|-----------------------------------|
| Gaussian         | Identity       | `\((-\infty, +\infty)\)`         | `\(\phi\)`                        |
| Poisson          | Log            | `\(0,1,2,\ldots,\infty\)`        | `\(\mu_i\)`                       |
| Binomial         | Logit          | `\(\frac{0,1,\ldots,n_i}{n_i}\)` | `\(\frac{\mu_i(1 - \mu_i)}{n_i}\)`|
| Gamma            | Inverse        | `\((0, \infty)\)`                | `\(\phi \mu_i^2\)`                |
| Inverse-Gaussian | Inverse-square | `\((0, \infty)\)`                | `\(\phi \mu_i^3\)`                |

`\(\phi\)` is the dispersion parameter; `\(\mu_i\)` is the expectation of `\(y_i\)`. In the binomial family, `\(n_i\)` is the number of trials
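---

# Link functions in R

R's family objects carry each link and its inverse, so the canonical links in the table can be inspected directly. A small sketch:

``` r
fam <- binomial()     # canonical link: logit
fam$linkfun(0.5)      # g(0.5) = logit(0.5) = 0
fam$linkinv(0)        # g^{-1}(0) = 0.5
poisson()$linkfun(10) # log(10), the Poisson canonical link
```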
---

# Common probability distributions

Gaussian distribution is rarely adequate in ecology; GLMs offer ecologically meaningful alternatives

.small[
- **Poisson** — counts; integers, non-negative, variance increases with mean

- **Binomial** — observed proportions from a total; integers, non-negative, bounded at 0 and 1, variance largest at `\(\pi = 0.5\)`

- **Binomial** — presence absence data; discrete values, 0 and 1, models probability of success

- **Gamma** — concentrations; non-negative (strictly positive with log link) real values, variance increases with mean, often right skewed with some very large values; note that exact zeros are not allowed
]

---

# Old notation

Wrote linear model as

`$$y_i = \alpha + \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_j x_{ji} + \varepsilon_i$$`

And assumed

$$ \varepsilon_i \sim \text{Normal}(0, \sigma^2) $$

This doesn't carry over to GLMs — there is no additive error term in the linear predictor

Sampling variation comes from the response distribution

---

# New notation

Rewrite the linear model as

`\begin{align*}
y_i & \sim \text{Normal}(\mu_i, \sigma^2) \\
\mu_i = \eta_i & = \alpha + \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_j x_{ji}
\end{align*}`

This now matches the general form for the GLM

`\begin{align*}
y_i & \sim \text{EF}(\mu_i, \boldsymbol{\theta}) \\
g(\mu_i) & = \alpha + \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_j x_{ji}
\end{align*}`

---

# New notation

<img src="resources/fig-gaussian-lm-descriptive-figure-1.png" width="80%" style="display: block; margin: auto;" />

---

# New notation

Binomial GLM

`\begin{align*}
y_i & \sim \text{Binomial}(n, p_i) \\
\text{logit}(p_i) & = \alpha + \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_j x_{ji}
\end{align*}`

Poisson GLM

`\begin{align*}
y_i & \sim \text{Poisson}(\lambda_i) \\
\log(\lambda_i) & = \alpha + \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_j x_{ji}
\end{align*}`

---

# New notation

<img src="resources/fig-poisson-lm-descriptive-figure-1.png" width="80%" style="display: block; margin: auto;" />

---

# New notation

<img src="resources/fig-other-dist-descr-1.png" width="80%" style="display: block; margin: auto;" />

---
class: inverse middle center subsection

# Examples

---

# Logistic regression — *Darlingtonia*

Timed censuses at 42 randomly-chosen leaves of the cobra lily

.small[
* Recorded number of wasp visits at 10 of the 42 leaves

* Test hypothesis that the probability of visitation is related to leaf height

* Response is a dichotomous variable (0/1)

* A suitable model is the logistic model
    `$$\pi = \frac{e^{\beta_0 + \beta_1 X_i}}{1 + e^{\beta_0 + \beta_1 X_i}}$$`

* The logit transformation produces
    `$$\log_e \left( \frac{\pi}{1-\pi} \right) = \beta_0 + \beta_1 X_i$$`

* This is the logistic regression and it is a special case of the GLM, with a binomial random component and the logit link function
]

---

# Logistic regression — *Darlingtonia*

`$$\log_e \left( \frac{\pi}{1-\pi} \right) = \beta_0 + \beta_1 X_i$$`

.small[
* `\(\beta_0\)` is a type of intercept; it determines the probability of success ( `\(y_i = 1\)` ), `\(\pi\)`, when `\(X = 0\)`

* If `\(\beta_0 = 0\)` then `\(\pi = 0.5\)`

* `\(\beta_1\)` is similar to the slope and determines how steeply the fitted logistic curve rises to the maximum value of `\(\pi = 1\)`

* Together, `\(\beta_0\)` and `\(\beta_1\)` specify the range of the `\(X\)` variable over which most of the rise occurs and determine how quickly the probability rises from 0 to 1

* Estimate the model parameters using **Maximum Likelihood**; find the parameter values that make the observed data most probable
]
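---

# Maximum likelihood — a toy example

To see what "most probable" means, compare the log likelihood of the earlier coin-flip data (7 heads in 10 flips) at two candidate values of *p* (illustrative values only)

``` r
dbinom(7, size = 10, prob = 0.7, log = TRUE) # larger: data more probable
dbinom(7, size = 10, prob = 0.3, log = TRUE) # smaller: data less probable
```

Maximum likelihood estimation searches over the parameters (here *p*, in a GLM the `\(\beta_j\)`) for the values that maximise this quantity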
---

# Link function

<img src="index_files/figure-html/logit-transformation-1.svg" style="display: block; margin: auto;" />

---

# Logistic regression — *Darlingtonia*

``` r
wasp <- read_csv(here("data", "darlingtonia.csv"),
                 comment = "#",
                 col_types = "dl")
```

---

# Logistic regression — *Darlingtonia*

``` r
m <- glm(visited ~ leafHeight, data = wasp, family = binomial)
m
```

```
## 
## Call:  glm(formula = visited ~ leafHeight, family = binomial, data = wasp)
## 
## Coefficients:
## (Intercept)   leafHeight  
##     -7.2930       0.1154  
## 
## Degrees of Freedom: 41 Total (i.e. Null);  40 Residual
## Null Deviance:     46.11
## Residual Deviance: 26.96   AIC: 30.96
```

---

# Logistic regression — *Darlingtonia*

.smaller[
``` r
summary(m)
```

```
## 
## Call:
## glm(formula = visited ~ leafHeight, family = binomial, data = wasp)
## 
## Coefficients:
##             Estimate Std. Error z value Pr(>|z|)    
## (Intercept) -7.29295    2.16081  -3.375 0.000738 ***
## leafHeight   0.11540    0.03655   3.158 0.001591 ** 
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for binomial family taken to be 1)
## 
##     Null deviance: 46.105  on 41  degrees of freedom
## Residual deviance: 26.963  on 40  degrees of freedom
## AIC: 30.963
## 
## Number of Fisher Scoring iterations: 6
```
]

---

# Logistic regression — *Darlingtonia*

.row[
.col-6[
.smaller[
``` r
# data to predict at
pdat <- with(wasp,
             tibble(leafHeight = seq(min(leafHeight),
                                     max(leafHeight),
                                     length = 100)))
# predict
pred <- predict(m, pdat, type = "link", se.fit = TRUE)
ilink <- family(m)$linkinv # g-1()
pdat <- pdat %>%
  bind_cols(data.frame(pred)) %>%
  mutate(fitted = ilink(fit),
         upper = ilink(fit + (2 * se.fit)),
         lower = ilink(fit - (2 * se.fit)))
# plot
ggplot(wasp, aes(x = leafHeight,
                 y = as.numeric(visited))) +
  geom_point() +
  geom_ribbon(aes(ymin = lower, ymax = upper,
                  x = leafHeight), data = pdat,
              inherit.aes = FALSE, alpha = 0.2) +
  geom_line(data = pdat, aes(y = fitted)) +
  labs(x = "Leaf Height [cm]",
       y = "Probability of visitation")
```
]
]
.col-6[
<!-- -->
]
]

---

# Wald statistics

`\(z\)` values are Wald statistics, which under the null hypothesis are *asymptotically* standard normal

|            | Estimate| Std. Error| z value| Pr(>&#124;z&#124;)|
|:-----------|--------:|----------:|-------:|------------------:|
|(Intercept) |  -7.2930|     2.1608| -3.3751|             0.0007|
|leafHeight  |   0.1154|     0.0365|  3.1575|             0.0016|

<br />

Tests the null hypothesis that `\(\beta_i = 0\)`

`$$z = \hat{\beta}_i / \mathrm{SE}(\hat{\beta}_i)$$`
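---

# Wald statistics — by hand

A quick check of the table above, computed directly from the fitted model `m`

``` r
b  <- coef(m)["leafHeight"]                # estimate
se <- sqrt(diag(vcov(m)))["leafHeight"]    # standard error
unname(b / se)                             # z = 3.1575, as in summary(m)
2 * pnorm(abs(b / se), lower.tail = FALSE) # two-sided p value, ~0.0016
```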
---

# Deviance

.small[
* In least squares the residual sum of squares is the measure of lack of fit

* In GLMs, the **deviance** plays the same role

* Deviance is `\(-2\)` times the log likelihood of the observed data under the current model

* Deviance is defined relative to an arbitrary constant — only **differences** of deviances have any meaning

* A difference of deviances is equivalent to a ratio of likelihoods

* An alternative to the Wald tests are deviance ratio or likelihood ratio tests
    `$$F = \frac{(D_a - D_b) / (\mathsf{df}_a - \mathsf{df}_b)}{D_b / \mathsf{df}_b}$$`

* `\(D_j\)` is the deviance of model `\(j\)`; we test whether model *b* is a significant improvement over the simpler model *a*; `\(\mathsf{df}_k\)` are the residual degrees of freedom of the respective models
]

---

# A Gamma GLM — simple age-depth modelling

Radiocarbon age estimates from depths within a peat bog (Brew & Maddy, 1995, QRA Technical Guide 5)

Estimate accumulation rate; the assumption here is linear accumulation

Uncertainty or error is greater at depth; a mean-variance relationship

Fit mid-depth & mid-calibrated age points

---

# A Gamma GLM — simple age-depth modelling

``` r
maddy <- read_csv(here("data", "maddy-peat.csv"), col_types = "cdddddd")
maddy <- mutate(maddy,
                midDepth = upperDepth - (0.5 * abs(upperDepth - lowerDepth)),
                calMid = calUpper - (0.5 * abs(calUpper - calLower)))
maddy
```

```
## # A tibble: 12 × 9
##    Sample upperDepth lowerDepth ageBP ageError calUpper calLower midDepth calMid
##    <chr>       <dbl>      <dbl> <dbl>    <dbl>    <dbl>    <dbl>    <dbl>  <dbl>
##  1 SRR-4…         20       22     355       35      509      307     19     408 
##  2 SRR-4…         26       28     465       35      542      480     25     511 
##  3 SRR-4…         32       34     635       35      671      545     31     608 
##  4 SRR-4…         38       40     740       35      732      666     37     699 
##  5 SRR-4…         44       46     865       35      916      691     43     804.
##  6 SRR-4…         50       52.5   870       35      918      692     48.8   805 
##  7 SRR-4…         56       58     985       35      967      795     55     881 
##  8 SRR-4…        100      108    1270       35     1284     1097     96    1190.
##  9 SRR-4…        200      207    2575       35     2761     2558    196.   2660.
## 10 SRR-4…        260      268    3370       35     3697     3487    256    3592 
## 11 SRR-4…        400      407    4675       35     5563     5306    396.   5434.
## 12 SRR-4…        493      500    5315       35     6263     5955    490.   6109 
```

---

# A Gamma GLM — simple age-depth modelling

.row[
.col-6[
``` r
ggplot(maddy, aes(x = midDepth, y = calMid)) +
  geom_point() +
  labs(y = "Calibrated Age", x = "Depth")
```
]
.col-6[
<!-- -->
]
]

---

# A Gamma GLM

.smaller[
``` r
m_gamma <- glm(calMid ~ midDepth, data = maddy,
               family = Gamma(link = "identity"))
summary(m_gamma)
```

```
## 
## Call:
## glm(formula = calMid ~ midDepth, family = Gamma(link = "identity"), 
##     data = maddy)
## 
## Coefficients:
##             Estimate Std. Error t value Pr(>|t|)    
## (Intercept) 197.2909    22.5603   8.745 5.35e-06 ***
## midDepth     12.5799     0.4543  27.693 8.74e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for Gamma family taken to be 0.004612561)
## 
##     Null deviance: 10.439047  on 11  degrees of freedom
## Residual deviance:  0.048316  on 10  degrees of freedom
## AIC: 145.57
## 
## Number of Fisher Scoring iterations: 4
```
]
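---

# A Gamma GLM — deviance test

A sketch of the deviance (likelihood ratio) test described earlier: does `midDepth` improve on an intercept-only model? With an estimated dispersion parameter, as in the Gamma family, the `\(F\)` form of the test is the appropriate one

``` r
m_null <- update(m_gamma, . ~ 1)   # intercept-only model
anova(m_null, m_gamma, test = "F") # deviance-based F test
```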
---

# A Gamma GLM

.row[
.col-6[
.smaller[
``` r
# data to predict at
pdat <- with(maddy,
             tibble(midDepth = seq(min(midDepth),
                                   max(midDepth),
                                   length = 100)))
# predict
p_gamma <- predict(m_gamma, pdat, type = "link", se.fit = TRUE)
ilink <- family(m_gamma)$linkinv
# confidence interval
p_gamma <- pdat %>%
  bind_cols(data.frame(p_gamma)) %>%
  mutate(fitted = ilink(fit),
         upper = ilink(fit + (2 * se.fit)),
         lower = ilink(fit - (2 * se.fit)))
# plot
p1 <- ggplot(maddy, aes(x = midDepth, y = calMid)) +
  geom_ribbon(aes(ymin = lower, ymax = upper, x = midDepth),
              data = p_gamma, inherit.aes = FALSE, alpha = 0.2) +
  geom_line(data = p_gamma, aes(y = fitted)) +
  geom_point() +
  labs(y = "Calibrated Age", x = "Depth", title = "Gamma GLM")
p1
```
]
]
.col-6[
<!-- -->
]
]

---

# A Gaussian GLM

.row[
.col-6[
.smaller[
``` r
# fit gaussian GLM
m_gaus <- glm(calMid ~ midDepth, data = maddy, family = gaussian)
# predict
p_gaus <- predict(m_gaus, pdat, type = "link", se.fit = TRUE)
ilink <- family(m_gaus)$linkinv
# prep confidence interval
p_gaus <- pdat %>%
  bind_cols(data.frame(p_gaus)) %>%
  mutate(fitted = ilink(fit),
         upper = ilink(fit + (2 * se.fit)),
         lower = ilink(fit - (2 * se.fit)))
# plot
p2 <- ggplot(maddy, aes(x = midDepth, y = calMid)) +
  geom_ribbon(aes(ymin = lower, ymax = upper, x = midDepth),
              data = p_gaus, inherit.aes = FALSE, alpha = 0.2) +
  geom_line(data = p_gaus, aes(y = fitted)) +
  geom_point() +
  labs(y = "Calibrated Age", x = "Depth", title = "Linear Model")
p2
```
]
]
.col-6[
<!-- -->
]
]

---

``` r
library("patchwork")
p1 + p2
```

<!-- -->

---

# Transform or GLM?

```
m1 <- glm(log(y) ~ x, data = my_data, family = Gamma(link = "identity"))

m2 <- glm(y ~ x, data = my_data, family = Gamma(link = "log"))
```

These models are different

1. `m1` is a model for `\(\mathbb{E}(\log(y_i))\)`
2. `m2` is a model for `\(\mathbb{E}(y_i)\)`

$$ \log(\mathbb{E}(y_i)) \neq \mathbb{E}(\log(y_i)) $$

Jensen's inequality

---

# Biological control of diamondback moths

.small[
Uefune et al (2020) studied the use of synthetic herbivory-induced plant volatiles (HIPVs) to attract larval parasitoid wasps (*Cotesia vestalis*) to control diamondback moths (DBM: *Plutella xylostella*), a global pest of cruciferous vegetables, in greenhouses growing mizuna (Japanese mustard) in Japan. They used two groups of greenhouses, the treated group having dispensers for the HIPVs as well as honeyfeeders to attract *C. vestalis*, and a second untreated group. In each greenhouse, a single sticky trap, replaced weekly over 6 months, was used to catch both DBMs and *C. vestalis* and the numbers of both counted. We will model numbers of *C. vestalis* against numbers of DBM and treatment, using each trap as the unit of analysis. The study was done in 2006 and 2008, but we only analyse the 2008 data.
]

???

While greenhouse ID could have been included as a random effect in a mixed model analysis, the available data did not record individual greenhouses.

---

# Biological control of diamondback moths

> We will model numbers of *C. vestalis* against numbers of DBM and treatment, using each trap as the unit of analysis.

What distribution would you expect the response variable `parasitoid` to follow?
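---

# Counts: Poisson or negative binomial?

Before looking at the data, a quick simulated sketch (parameter values are illustrative only). For a Poisson the variance equals the mean; a negative binomial allows the variance to greatly exceed the mean

``` r
set.seed(42)
x_pois <- rpois(1e4, lambda = 1)           # Poisson: variance = mean
x_nb   <- rnbinom(1e4, mu = 1, size = 0.3) # NB: variance = mu + mu^2 / size
c(poisson = var(x_pois) / mean(x_pois),    # ratio ~ 1
  negbin  = var(x_nb) / mean(x_nb))        # ratio >> 1
```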
---

# Diamondback moths — data

``` r
moth <- readr::read_csv("data/uefunex.csv")
```

---

# Diamondback moths — summary stats

``` r
library("dplyr")
moth |>
  group_by(treatment) |>
  summarise(n = n(),
            mean = mean(parasitoid),
            median = median(parasitoid),
            sd = sd(parasitoid))
```

```
## # A tibble: 2 × 5
##   treatment     n  mean median    sd
##   <chr>     <int> <dbl>  <dbl> <dbl>
## 1 control     177 1.10       0  4.06
## 2 treated     242 0.938      0  2.87
```

---

# Diamondback moths — plot data

``` r
library("ggplot2")
moth |>
  ggplot(aes(x = treatment, y = parasitoid)) +
  geom_violin(aes(fill = treatment))
```

<img src="index_files/figure-html/plot-moth-1-1.svg" width="80%" style="display: block; margin: auto;" />

---

# Diamondback moths

``` r
moth |>
  ggplot(aes(x = treatment, y = parasitoid)) +
  geom_violin(aes(fill = treatment)) +
  scale_y_sqrt()
```

<img src="index_files/figure-html/plot-moth-sqrt-1.svg" width="80%" style="display: block; margin: auto;" />

---

# Diamondback moths

``` r
# install.packages("ggforce")
moth |>
  ggplot(aes(x = treatment, y = parasitoid)) +
  geom_violin(aes(fill = treatment)) +
  scale_y_continuous(trans = ggforce::power_trans(1/4))
```

<img src="index_files/figure-html/plot-moth-root-root-1.svg" width="70%" style="display: block; margin: auto;" />

---

# Biological control of diamondback moths

What do you conclude about the response variable?

What distribution would you expect the response variable `parasitoid` to follow?

---

# Fit Poisson GLM

``` r
moth_glm1 <- glm(parasitoid ~ moth + treatment + moth:treatment,
                 family = poisson,
                 data = moth)
```

---

# Poisson GLM summary

``` r
summary(moth_glm1)
```

```
## 
## Call:
## glm(formula = parasitoid ~ moth + treatment + moth:treatment, 
##     family = poisson, data = moth)
## 
## Coefficients:
##                       Estimate Std. Error z value Pr(>|z|)    
## (Intercept)           -0.30106    0.08804  -3.420 0.000627 ***
## moth                   0.22652    0.01433  15.804  < 2e-16 ***
## treatmenttreated      -0.13617    0.11899  -1.144 0.252463    
## moth:treatmenttreated  0.18471    0.02597   7.111 1.15e-12 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for poisson family taken to be 1)
## 
##     Null deviance: 1600.0  on 418  degrees of freedom
## Residual deviance: 1220.5  on 415  degrees of freedom
## AIC: 1551.3
## 
## Number of Fisher Scoring iterations: 6
```

---

# Analysis of deviance table

``` r
anova(moth_glm1, test = "LRT")
```

```
## Analysis of Deviance Table
## 
## Model: poisson, link: log
## 
## Response: parasitoid
## 
## Terms added sequentially (first to last)
## 
## 
##                Df Deviance Resid. Df Resid. Dev  Pr(>Chi)    
## NULL                             418     1600.0              
## moth            1   328.17       417     1271.9 < 2.2e-16 ***
## treatment       1     5.78       416     1266.1   0.01623 *  
## moth:treatment  1    45.62       415     1220.5 1.434e-11 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
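---

# Analysis of deviance — by hand

A quick check of the interaction row in the table above: the drop in deviance is referred to a `\(\chi^2\)` distribution with 1 degree of freedom

``` r
# change in deviance of 45.62 on 1 df, from the table above
pchisq(45.62, df = 1, lower.tail = FALSE) # ~1.4e-11, matching Pr(>Chi)
```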
---

# Plot the estimated model

.row[
.col-6[
.small[
``` r
moth |>
  ggplot(aes(y = parasitoid, x = moth, color = treatment)) +
  geom_jitter(stat = "identity", width = 0.05, height = 0.05) +
  geom_smooth(method = "glm",
              method.args = list(family = "poisson")) +
  theme(legend.position = "bottom")
```
]
]
.col-6[
<!-- -->
]
]

---

# Problems

``` r
library("dplyr")
moth |>
  group_by(treatment) |>
  summarise(n = n(),
            mean = mean(parasitoid),
            median = median(parasitoid),
            sd = sd(parasitoid))
```

```
## # A tibble: 2 × 5
##   treatment     n  mean median    sd
##   <chr>     <int> <dbl>  <dbl> <dbl>
## 1 control     177 1.10       0  4.06
## 2 treated     242 0.938      0  2.87
```

---

# Overdispersion

``` r
presid <- resid(moth_glm1, type = "pearson")
n <- nrow(moth)
params <- length(coef(moth_glm1))
disp <- sum(presid^2) / (n - params)
disp
```

```
## [1] 7.711486
```

---

# Overdispersion

.small[
``` r
moth_glm2 <- glm(parasitoid ~ moth + treatment + moth:treatment,
                 family = quasipoisson,
                 data = moth)
summary(moth_glm2)
```

```
## 
## Call:
## glm(formula = parasitoid ~ moth + treatment + moth:treatment, 
##     family = quasipoisson, data = moth)
## 
## Coefficients:
##                       Estimate Std. Error t value Pr(>|t|)    
## (Intercept)           -0.30106    0.24450  -1.231   0.2189    
## moth                   0.22652    0.03981   5.691  2.4e-08 ***
## treatmenttreated      -0.13617    0.33046  -0.412   0.6805    
## moth:treatmenttreated  0.18471    0.07214   2.561   0.0108 *  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## (Dispersion parameter for quasipoisson family taken to be 7.71271)
## 
##     Null deviance: 1600.0  on 418  degrees of freedom
## Residual deviance: 1220.5  on 415  degrees of freedom
## AIC: NA
## 
## Number of Fisher Scoring iterations: 6
```
]

---

# Negative binomial

.small[
``` r
library("mgcv")
moth_glm3 <- gam(parasitoid ~ moth + treatment + moth:treatment,
                 family = nb(),
                 method = "ML",
                 data = moth)
summary(moth_glm3)
```

```
## 
## Family: Negative Binomial(0.263) 
## Link function: log 
## 
## Formula:
## parasitoid ~ moth + treatment + moth:treatment
## 
## Parametric coefficients:
##                       Estimate Std. Error z value Pr(>|z|)    
## (Intercept)            -0.7326     0.1897  -3.861 0.000113 ***
## moth                    0.5445     0.0731   7.448 9.44e-14 ***
## treatmenttreated        0.1632     0.2460   0.663 0.507015    
## moth:treatmenttreated   0.0400     0.1379   0.290 0.771761    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## 
## R-sq.(adj) =   -116   Deviance explained = 22.5%
## -ML = 458.89  Scale est. = 1         n = 419
```
]

---

# Negative binomial

``` r
library("glmmTMB")
moth_glm4 <- glmmTMB(parasitoid ~ moth + treatment + moth:treatment,
                     family = nbinom2("log"),
                     REML = FALSE,
                     data = moth)
summary(moth_glm4)
```

```
##  Family: nbinom2  ( log )
## Formula:          parasitoid ~ moth + treatment + moth:treatment
## Data: moth
## 
##      AIC      BIC   logLik deviance df.resid 
##    927.8    948.0   -458.9    917.8      414 
## 
## 
## Dispersion parameter for nbinom2 family (): 0.263 
## 
## Conditional model:
##                       Estimate Std. Error z value Pr(>|z|)    
## (Intercept)            -0.7326     0.2175  -3.369 0.000754 ***
## moth                    0.5445     0.1543   3.529 0.000417 ***
## treatmenttreated        0.1632     0.2698   0.605 0.545237    
## moth:treatmenttreated   0.0400     0.2195   0.182 0.855416    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

---

# Visualising the model fit

``` r
library("marginaleffects")
moth_glm4 |>
  plot_predictions(condition = c("moth", "treatment"),
                   vcov = TRUE, type = "response")
```

<!-- -->
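---

# Two ways to handle overdispersion

The quasi-Poisson and negative binomial models assume different mean-variance relationships. A sketch using the estimates from above (quasi-Poisson dispersion `\(\hat{\phi} \approx 7.71\)`; NB `\(\hat{\theta} \approx 0.263\)`):

``` r
mu <- c(0.5, 1, 2, 5)
rbind(quasipoisson = 7.71 * mu,      # variance linear in the mean
      negbin = mu + mu^2 / 0.263)    # variance quadratic in the mean
```

This difference in assumed variance is one reason the two models weight observations, and hence estimate coefficients, differently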
---
class: inverse middle center huge-subsection

# GAMs

---
class: inverse middle center subsection

# Motivating example

---

# HadCRUT4 time series

<!-- -->

???

Hadley Centre NH temperature record ensemble

How would you model the trend in these data?

---

# (Generalized) Linear Models

`$$y_i \sim \mathcal{N}(\mu_i, \sigma^2)$$`

`$$\mu_i = \beta_0 + \beta_1 \mathtt{year}_{i} + \beta_2 \mathtt{year}^2_{i} + \cdots + \beta_j \mathtt{year}^j_{i}$$`

Assumptions

1. linear effects of covariates are a good approximation of the true effects
2. conditional on the values of covariates, `\(y_i | \mathbf{X} \sim \mathcal{N}(\mu_i, \sigma^2)\)`
3. this implies all observations have the same *variance*
4. `\(y_i | \mathbf{X} = \mathbf{x}\)` are *independent*

An **additive** model addresses the first of these

---
class: inverse center middle subsection

# Why bother with anything more complex?

---

# Is this linear?

<!-- -->

---

# Polynomials perhaps…

<!-- -->

---

# Polynomials perhaps…

We can keep on adding ever more powers of `\(\boldsymbol{x}\)` to the model — a model selection problem

**Runge phenomenon** — oscillations at the edges of an interval — means simply moving to higher-order polynomials doesn't always improve accuracy

---
class: inverse middle center subsection

# GAMs offer a solution

---

# HadCRUT data set

``` r
library('readr')
library('dplyr')
URL <- "https://bit.ly/hadcrutv4"
gtemp <- read_table(URL, col_types = 'nnnnnnnnnnnn',
                    col_names = FALSE) %>%
  select(num_range('X', 1:2)) %>%
  setNames(nm = c('Year', 'Temperature'))
```

[File format](https://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/series_format.html)

---

# HadCRUT data set

``` r
gtemp
```

```
## # A tibble: 172 × 2
##     Year Temperature
##    <dbl>       <dbl>
##  1  1850      -0.336
##  2  1851      -0.159
##  3  1852      -0.107
##  4  1853      -0.177
##  5  1854      -0.071
##  6  1855      -0.19 
##  7  1856      -0.378
##  8  1857      -0.405
##  9  1858      -0.4  
## 10  1859      -0.215
## # ℹ 162 more rows
```

---

# Fitting a GAM

``` r
library('mgcv')
m <- gam(Temperature ~ s(Year), data = gtemp, method = 'REML')
summary(m)
```

.smaller[
```
## 
## Family: gaussian 
## Link function: identity 
## 
## Formula:
## Temperature ~ s(Year)
## 
## Parametric coefficients:
##              Estimate Std. Error t value Pr(>|t|)  
## (Intercept) -0.020477   0.009731  -2.104   0.0369 *
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## Approximate significance of smooth terms:
##           edf Ref.df     F p-value    
## s(Year) 7.837   8.65 145.1  <2e-16 ***
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
## 
## R-sq.(adj) =   0.88   Deviance explained = 88.6%
## -REML = -91.237  Scale est. = 0.016287  n = 172
```
]

---

# Fitted GAM

<!-- -->

---
class: inverse middle center big-subsection

# GAMs

---

# Generalized Additive Models

<br />



.references[Source: [GAMs in R by Noam Ross](https://noamross.github.io/gams-in-r-course/)]

???

GAMs are an intermediate-complexity model

* can learn from data without needing to be informed by the user
* remain interpretable because we can visualize the fitted features

---

# How is a GAM different?

`$$\begin{align*}
y_i &\sim \mathcal{D}(\mu_i, \theta) \\
\mathbb{E}(y_i) &= \mu_i = g^{-1}(\eta_i)
\end{align*}$$`

We model the mean of the data as a sum of linear terms:

`$$\eta_i = \beta_0 + \sum_j \color{red}{\beta_j x_{ji}}$$`

A GAM is a sum of _smooth functions_ or _smooths_

`$$\eta_i = \beta_0 + \sum_j \color{red}{f_j(x_{ji})}$$`

---

# How did `gam()` *know*?

<!-- -->

---
class: inverse
background-image: url('./resources/rob-potter-398564.jpg')
background-size: contain

# What magic is this?

.footnote[Photo by [Rob Potter](https://unsplash.com/@robpotter?utm_medium=referral&utm_campaign=photographer-credit&utm_content=creditBadge) on Unsplash]
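---

# What magic? Basis functions

Behind the scenes, a smooth is built from simple *basis functions*. A minimal sketch using a B-spline basis from the **splines** package; mgcv's bases are more sophisticated, but the idea is the same

``` r
library("splines")
basis <- bs(gtemp$Year, df = 8) # evaluate 8 B-spline basis functions of Year
dim(basis)                      # 172 rows (years) x 8 basis functions
# a weighted sum of the basis columns gives a wiggly function of Year
fit <- lm(Temperature ~ bs(Year, df = 8), data = gtemp)
```

mgcv does something similar, but additionally *penalises* the basis coefficients to control how wiggly the fitted smooth can be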
---
class: inverse
background-image: url('resources/wiggly-things.png')
background-size: contain