Overview

Dollarizing emotional payoff via the Mean Rating Effect (MRE), Dystopian Catharsis delivers about +$2.23M per film (PaM) during financial stress, while box-office staples like Heroic Escapism sit closer to +$1.09M. This is evidence that when wallets tighten, catharsis and comfort win, and oversupplying heavy tones dilutes yield even when they “feel” successful. Across tones, the Average Gross Estimate (AGE), the predicted five-week gross lift under stress for a typical film, ranges from roughly +$0.3M to +$9.0M (e.g., Dystopian Catharsis ≈ +$9.0M vs. Heroic Escapism ≈ +$4.3M). Controlling for release scale and sentiment, emotional valence explains meaningful variance in five-week real gross during high-stress months.
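For reference, the PaM (predict-at-median) gross lift reported above as AGE is the model's Stress-minus-Normal predicted difference for a tone, with opening scale and release sentiment held at their medians; in our notation (not taken from the pipeline itself):

$$\mathrm{AGE}(t) \;=\; \hat{y}\left(t,\ \text{Stress},\ \tilde{\theta},\ \tilde{s}\right) \;-\; \hat{y}\left(t,\ \text{Normal},\ \tilde{\theta},\ \tilde{s}\right)$$

where $\hat{y}$ is the fitted five-week real gross from the Section 5 model, $\tilde{\theta}$ is the median opening-theater count, and $\tilde{s}$ is the median release sentiment.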

1. Data Ingestion & Cleaning

We performed data ingestion and cleaning by reading the master CSV, standardizing column names, coercing types, and dropping rows with a missing five-week real gross, establishing a clean dataset for analysis.

# Libraries used throughout (attached per the session info below)
library(readr)      # read_csv(), parse_number()
library(dplyr)
library(tidyr)
library(stringr)
library(lubridate)  # mdy()
library(janitor)    # clean_names()
library(broom)      # tidy()
library(knitr)      # kable()

raw_df <- read_csv("/Users/jasonclark/Downloads/master_movie_datav1.csv", show_col_types = FALSE) %>%
  clean_names()

df <- raw_df %>%
  mutate(
    release_date         = mdy(release_date),
    sentiment_at_release = as.numeric(sentiment_at_release),
    nber                 = as.integer(nber),
    five_week_real_gross = parse_number(x5_week_real_gross),
    opening_theaters     = as.numeric(opening_theaters)
  ) %>%
  filter(!is.na(five_week_real_gross))

# Preview
kable(
  df %>% select(year, title, five_week_real_gross, opening_theaters) %>% slice_head(n = 5),
  caption = "Sample of cleaned box office data",
  booktabs = TRUE
)
Sample of cleaned box office data
year  title                           five_week_real_gross  opening_theaters
2000  How the Grinch Stole Christmas             378499047              3134
2000  Mission: Impossible II                     316822028              3653
2000  Gladiator                                  250066150              2938
2000  The Perfect Storm                          280759344              3407
2000  Meet the Parents                           205876597              2614

2. Flag Construction

We constructed a recession flag (nber_flag) and a three-level stress indicator (stress_level), and recoded the primary emotional tone as a factor, to enable stratified analysis:

df <- df %>%
  mutate(
    nber_flag    = factor(nber, levels = c(0, 1),
                          labels = c("Non-recession", "Recession")),
    stress_level = factor(
      case_when(
        nber == 1                  ~ "Stress",
        sentiment_at_release <= 80 ~ "Stress",
        sentiment_at_release <= 85 ~ "Elevated",
        TRUE                        ~ "Normal"
      ),
      levels = c("Normal", "Elevated", "Stress")
    ),
    primary = factor(qc_primary_classification)
  )

kable(
  df %>% count(stress_level),
  caption  = "Count of films by stress level",
  booktabs = TRUE
)
Count of films by stress level
stress_level n
Normal 2021
Elevated 443
Stress 1623
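As a quick sanity check on the stratification (a back-of-envelope calculation from the counts above, not part of the pipeline), roughly two in five films fall in the Stress stratum:

# Share of films by stress level, from the table above
c(Normal = 2021, Elevated = 443, Stress = 1623) / (2021 + 443 + 1623)
# Normal ≈ 0.49, Elevated ≈ 0.11, Stress ≈ 0.40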

3. Distributional Statistics

We computed counts and distributional statistics of five-week real gross by primary emotion and stress level to identify over- and under-performing emotional tones under stress.

primary_summary <- df %>%
  group_by(primary, stress_level) %>%
  summarise(
    films        = n(),
    median_gross = median(five_week_real_gross, na.rm = TRUE),
    iqr_gross    = IQR(five_week_real_gross, na.rm = TRUE),
    .groups      = "drop"
  )

primary_summary %>%
  arrange(desc(films)) %>%
  slice_head(n = 5) %>%
  kable(
    caption  = "Top 5 emotion–stress combinations by film count",
    booktabs = TRUE,
    digits   = 0
  )
Top 5 emotion–stress combinations by film count
primary          stress_level  films  median_gross  iqr_gross
Humorous Escape  Normal          473      48359377   79421155
Dark Empathy     Normal          421      14159049   36878588
Dark Empathy     Stress          402      12189890   39243084
Humorous Escape  Stress          357      46218546   74529148
Heroic Escapism  Normal          307      69767243  134011088

4. Baseline Association: Release Scale and Gross

We quantified the relationship between opening theaters and five-week real gross via a Pearson correlation (r ≈ 0.58, p < .001), setting expectations for how scale predicts revenue prior to stress interactions.

cor_test <- cor.test(df$opening_theaters, df$five_week_real_gross)
broom::tidy(cor_test) %>%
  select(estimate, p.value, conf.low, conf.high) %>%
  kable(
    caption  = "Release Scale ↔ Gross Correlation (r, p-value, CI)",
    digits   = 3,
    booktabs = TRUE
  )
Release Scale ↔ Gross Correlation (r, p-value, CI)
estimate p.value conf.low conf.high
0.578 0 0.557 0.598
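Squaring the estimate gives a rough variance-explained reading (a back-of-envelope figure, not produced by the pipeline): opening scale alone accounts for about a third of the variation in five-week real gross before any stress terms enter.

0.578^2
# ≈ 0.33, i.e. roughly a third of the variance in five-week real gross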

5. Interaction Effects in Stress Context

We fit a linear model with stress-level interactions, confirming that theatrical scale yields diminishing returns under economic strain (opening_theaters × stress level interaction of roughly –$4.7K in five-week gross per opening theater, p = 0.02). Emotional tone also emerged as a statistically significant factor in how films perform under pressure (p < 0.05 across several interactions). When economic stress weakens the financial payoff of scale, which emotional tones help steady outcomes, and which make things worse?

# Impute missing openings and drop helper column
model_df <- df %>%
  group_by(year) %>%
  mutate(
    med_openings     = median(opening_theaters, na.rm = TRUE),
    opening_theaters = if_else(is.na(opening_theaters), med_openings, opening_theaters)
  ) %>%
  ungroup() %>%
  select(-med_openings)

# Fit interaction model
general_model <- lm(
  five_week_real_gross ~ opening_theaters * stress_level +
                         primary * stress_level +
                         sentiment_at_release,
  data = model_df
)

# Extract emotional tone × stress interactions from gross model
gross_terms <- tidy(general_model) %>%
  filter(str_detect(term, "primary") & str_detect(term, "stress_levelStress")) %>%
  mutate(primary = str_remove_all(term, "primary|stress_levelStress|:")) %>%
  select(primary, gross_estimate = estimate)

# Extract and rename key coefficient for stress interaction
tidy(general_model) %>%
  filter(term == 'opening_theaters:stress_levelStress') %>%
  transmute(
    Term                   = term,
    `Estimated Impact ($)` = estimate,
    StdError               = std.error,
    TValue                 = statistic,
    PValue                 = p.value
  ) %>%
  kable(
    caption = "Stress-Level Impact on Theater Scale",
    booktabs = TRUE,
    digits = 2
  )
Stress-Level Impact on Theater Scale
Term                                 Estimated Impact ($)  StdError  TValue  PValue
opening_theaters:stress_levelStress              -4699.26    2006.2   -2.34    0.02
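To put the coefficient on a practical scale (a back-of-envelope reading of the table above, not a pipeline output), a wide release of about 3,000 theaters carries an expected five-week gross penalty of roughly $14M under stress relative to the same footprint in normal months:

# Approximate stress penalty at a typical wide-release footprint
3000 * -4699.26
# ≈ -$14.1M in expected five-week real gross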

6. Supply‐Bias Matrix

We built a supply-bias matrix of release counts and median grosses, grouped by year and emotional tone, to examine whether studios' releases under stress actually reflected audience needs. By pairing release frequency with median gross, we disentangle what was offered from what resonated under economic stress. Some tones, such as Heroic Escapism, dominate output, while others appear only sporadically yet remain profitable, an early hint of under-served potential.

# Create supply-bias matrix and preview Dark Empathy
supply_bias <- df %>%
  group_by(year, primary) %>%
  summarise(
    count_releases = n(),
    median_gross   = median(five_week_real_gross, na.rm = TRUE),
    .groups        = 'drop'
  ) %>%
  pivot_wider(
    names_from = primary,
    values_from = c(count_releases, median_gross)
  )

# Show a clean sample for one tone
supply_bias %>%
  select(year, `count_releases_Dark Empathy`, `median_gross_Dark Empathy`) %>%
  slice_head(n = 5) %>%
  kable(
    caption = "Supply Matrix Sample: Dark Empathy (First 5 Years)",
    booktabs = TRUE
  )
Supply Matrix Sample: Dark Empathy (First 5 Years)
year  count_releases_Dark Empathy  median_gross_Dark Empathy
2000                           38                   18411472
2001                           37                   21468625
2002                           32                   12733431
2003                           30                    7811404
2004                           35                   14691340

7. Missed Opportunities: Under‐Supplied, Overperforming Tones

We combined our Section 5 regression results with actual release counts during stress periods to identify which emotional tones were both scarce and high-performing. By ranking tones on modeled stress-interaction effects and matching those ranks to observed release counts, we expose a clear mismatch: the very tones audiences most needed under economic pressure were the ones studios under-invested in.
This raises a critical question—are we celebrating true audience resonance, or simply rewarding what made it to screen?

# Filter to stress period records only
stress_period <- df %>%
  filter(stress_level == "Stress")

# Summarize observed supply and performance under stress
stress_supply <- stress_period %>%
  group_by(primary) %>%
  summarise(
    releases_stress = n(),
    median_stress_gross = median(five_week_real_gross, na.rm = TRUE),
    .groups = "drop"
  )

# Extract model terms for tone × stress interactions
tone_interactions <- tidy(general_model) %>%
  filter(str_detect(term, "stress_levelStress:primary")) %>%
  mutate(primary = str_remove(term, "stress_levelStress:primary")) %>%
  select(primary, estimate)

# Merge with observed supply
missed_df <- stress_supply %>%
  left_join(tone_interactions, by = "primary") %>%
  arrange(releases_stress, desc(estimate))

# Preview top 5 under-supplied but high-performing tones
missed_df %>%
  slice_head(n = 5) %>%
  kable(
    caption = "Top 5 Missed Opportunities: Low Supply, High Estimated Performance Under Stress",
    booktabs = TRUE
  )
Top 5 Missed Opportunities: Low Supply, High Estimated Performance Under Stress
primary                releases_stress  median_stress_gross  estimate
Collective Resilience               85             27819517        NA
Dystopian Catharsis                 97             41342497  26695134
Comfort Nostalgia                   98             30435885  23189931
Intelligent Rebellion              103             16904685  20735755
Redemptive Grit                    227             13295759  19443971
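The NA for Collective Resilience most likely reflects the factor coding rather than missing data: the reference tone has no tone × stress interaction coefficient in the model, so nothing joins to it. A quick check, assuming the Section 2 factor construction:

# The first factor level is the model's reference tone and therefore has no
# interaction term to contribute an estimate
levels(df$primary)[1]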

8. Emotional Payoff vs. Box Office Performance Under Stress

While box office tells one story, emotional payoff tells another. We measure that payoff under stress across IMDb (audience), Rotten Tomatoes (hybrid), and Metascore (critic). The Mean Rating Effect (MRE) puts everything in dollars: each platform's PaM delta (predict-at-median covariates, the modeled Stress minus Normal difference with opening scale and release sentiment held at their medians) is converted into expected gross using per-point betas, and those dollarized ratings are then averaged with the PaM gross delta for a typical film. Higher MRE signals deeper emotional resonance under stress. In our results, Comfort Nostalgia and Dark Empathy rise across platforms, Redemptive Grit connects more than it earns, and Heroic Escapism often sells tickets yet under-connects emotionally. The table highlights the top five tones by this blended uplift alongside their PaM gross lift.

# Section 8: PaM (Stress − Normal) effects — gross, ratings, and MRE_pred
stopifnot(exists("df"))

# Fit general_model if missing (uses your Section 2 variables as-is)
if (!exists("general_model")) {
  dat2 <- df %>%
    dplyr::group_by(year) %>%
    dplyr::mutate(
      med_openings     = median(opening_theaters, na.rm = TRUE),
      opening_theaters = if_else(is.na(opening_theaters), med_openings, opening_theaters)
    ) %>%
    dplyr::ungroup() %>% dplyr::select(-med_openings)

  general_model <- lm(
    five_week_real_gross ~ opening_theaters * stress_level +
      primary * stress_level + sentiment_at_release,
    data = dat2
  )
}

# Levels from your data/model (handles three-level stress, we use only Normal/Stress for deltas)
prim_levels         <- levels(df$primary)
stress_levels_model <- levels(df$stress_level)              # e.g., c("Normal","Elevated","Stress")
stress_use          <- c("Normal","Stress")

# Representative (median) covariates
median_open <- median(df$opening_theaters, na.rm = TRUE)
median_sent <- median(df$sentiment_at_release, na.rm = TRUE)

# ---- PaM gross predictions (median covariates) ----
newdata_g <- expand.grid(
  primary              = prim_levels,
  opening_theaters     = median_open,
  sentiment_at_release = median_sent,
  stress_level         = stress_use,
  KEEP.OUT.ATTRS       = FALSE,
  stringsAsFactors     = FALSE
)
newdata_g$primary      <- factor(newdata_g$primary,      levels = prim_levels)
newdata_g$stress_level <- factor(newdata_g$stress_level, levels = stress_levels_model)

preds_g <- newdata_g %>%
  dplyr::mutate(pred = as.numeric(predict(general_model, newdata = newdata_g))) %>%
  dplyr::select(primary, stress_level, pred) %>%
  tidyr::pivot_wider(names_from = stress_level, values_from = pred)

pam_gross <- preds_g %>%
  dplyr::transmute(
    primary,
    gross_normal        = Normal,
    gross_stress        = Stress,
    gross_estimate_pred = Stress - Normal
  )

# ---- PaM rating deltas (Stress − Normal) ----
# IMDb
df_imdb <- df %>%
  dplyr::mutate(im_db_rating = suppressWarnings(as.numeric(im_db_rating))) %>%
  dplyr::filter(!is.na(im_db_rating))
m_imdb <- if (nrow(df_imdb) > 0) lm(im_db_rating ~ primary * stress_level, data = df_imdb) else NULL

# Rotten Tomatoes
df_rt <- df %>%
  dplyr::mutate(
    rotten_tomatoes = stringr::str_remove(rotten_tomatoes, "%"),
    rotten_tomatoes = suppressWarnings(as.numeric(rotten_tomatoes)),
    rotten_tomatoes = dplyr::if_else(rotten_tomatoes <= 1, rotten_tomatoes * 100, rotten_tomatoes)
  ) %>% dplyr::filter(!is.na(rotten_tomatoes))
m_rt <- if (nrow(df_rt) > 0) lm(rotten_tomatoes ~ primary * stress_level, data = df_rt) else NULL

# Metascore
df_meta <- df %>%
  dplyr::mutate(metascore = suppressWarnings(as.numeric(metascore))) %>%
  dplyr::filter(!is.na(metascore))
m_meta <- if (nrow(df_meta) > 0) lm(metascore ~ primary * stress_level, data = df_meta) else NULL

# Helper to get Stress−Normal by primary from a fitted rating model
rating_delta <- function(mod, colname) {
  if (is.null(mod)) return(tibble::tibble(primary = prim_levels, !!colname := NA_real_))
  pr <- expand.grid(primary = prim_levels, stress_level = stress_use, stringsAsFactors = FALSE)
  pr$primary      <- factor(pr$primary,      levels = prim_levels)
  pr$stress_level <- factor(pr$stress_level, levels = stress_levels_model)
  pred <- pr %>%
    dplyr::mutate(pred = as.numeric(predict(mod, newdata = pr))) %>%
    dplyr::select(primary, stress_level, pred) %>%
    tidyr::pivot_wider(names_from = stress_level, values_from = pred) %>%
    dplyr::transmute(primary, !!colname := Stress - Normal)
  pred
}

imdb_est <- rating_delta(m_imdb, "imdb_estimate")
rt_est   <- rating_delta(m_rt,   "rt_estimate")
meta_est <- rating_delta(m_meta, "meta_estimate")
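# (Optional sketch, not executed as part of the locked pipeline.) The per-point
# dollar conversion described in the Section 8 narrative could be wired in here;
# beta_imdb and imdb_dollars are assumed names for illustration only:
#   beta_imdb    <- coef(lm(five_week_real_gross ~ im_db_rating, data = df_imdb))[["im_db_rating"]]
#   imdb_dollars <- dplyr::mutate(imdb_est, imdb_dollars = imdb_estimate * beta_imdb)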

# ---- Assemble console-aligned table + QC lock on PaM gross ----
final_tbl <- pam_gross %>%
  dplyr::left_join(imdb_est, by = "primary") %>%
  dplyr::left_join(rt_est,   by = "primary") %>%
  dplyr::left_join(meta_est, by = "primary") %>%
  dplyr::mutate(
    mean_rating_effect_pred = rowMeans(
      dplyr::across(c(imdb_estimate, rt_estimate, meta_estimate, gross_estimate_pred)),
      na.rm = TRUE
    )
  )

# QC: re-compute PaM gross deltas from preds_g and confirm exact match
delta_check <- preds_g %>% dplyr::transmute(primary, delta = Stress - Normal)
qc <- final_tbl %>% dplyr::left_join(delta_check, by = "primary") %>%
  dplyr::mutate(diff = abs(gross_estimate_pred - delta))
if (any(qc$diff > 1e-6, na.rm = TRUE)) stop("QC failed: PaM gross deltas drifted from locked values.")


# Print console-style table in the doc (top 5 rows)
knitr::kable(
  final_tbl %>%
    dplyr::arrange(dplyr::desc(mean_rating_effect_pred)) %>%
    dplyr::transmute(primary, gross_normal, gross_stress, gross_estimate_pred, mean_rating_effect_pred) %>%
    dplyr::slice_head(n = 5),
 caption = "Top 5 tones by PaM mean rating effect (MRE_pred) alongside PaM gross lift for a median-scale film.",
  booktabs = TRUE,
  digits = c(NA, 2, 2, 2, 2)
)
Top 5 tones by PaM mean rating effect (MRE_pred) alongside PaM gross lift for a median-scale film.
primary              gross_normal  gross_stress  gross_estimate_pred  mean_rating_effect_pred
Dystopian Catharsis      55872821      62711587              6838766                1709691.6
Heroic Escapism         105205723     109335615              4129893                1032473.0
Comfort Nostalgia        78308680      81642243              3333563                 833394.1
Humorous Escape          68936233      71899114              2962880                 740720.7
Dark Empathy             72092548      72989726               897178                 224295.6


The data tells a subtle truth: When the economy falters, audiences crave more than distraction—they hunger for stories that mirror their own uncertainty. Big, wide releases aren’t the ones that automatically resonate. Sometimes it’s the films unafraid to confront hardship that linger—and discover meaning in the dark.

9. The Consistency Question: Mapping Emotional Predictability Under Stress

Emotional payoff is powerful—but is it predictable?
Some tones show high average returns, but their success is erratic. This section explores emotional volatility under economic stress, revealing which tones offer safe emotional returns—and which are wild cards.

# === Emotional Volatility Under Economic Stress ===
volatility_df <- df %>%
  filter(stress_level == "Stress") %>%
  group_by(primary) %>%
  summarise(
    n_titles     = n(),
    median_gross = median(five_week_real_gross, na.rm = TRUE),
    sd_gross     = sd(five_week_real_gross, na.rm = TRUE),
    .groups      = "drop"
  ) %>%
  arrange(desc(sd_gross))

volatility_df %>%
  slice_head(n = 5) %>%
  kable(
    caption  = "Top 5 Most Volatile Emotional Tones (Standard Deviation of Gross Under Stress)",
    booktabs = TRUE,
    digits   = 0
  )
Top 5 Most Volatile Emotional Tones (Standard Deviation of Gross Under Stress)
primary                n_titles  median_gross   sd_gross
Heroic Escapism             254      74587978  137734453
Collective Resilience        85      27819517   90252420
Dystopian Catharsis          97      41342497   78612312
Humorous Escape             357      46218546   73221154
Comfort Nostalgia            98      30435885   70887268

10. Emotional Tone Pairings Under Stress

Most films in the dataset are not anchored by a single emotional identity. While our primary analyses isolate tone-level effects to maintain clean attribution and frame the emotional identity of a film, nearly every film carries a secondary emotional resonance that shapes its nuance.

By summarizing average gross and IMDb performance for each Primary × Secondary pairing during high-stress periods, we explore how emotional blends shape audience connection, and how secondary emotions amplify or dilute the commercial and emotional impact of the primary. By calculating each pairing's relative lift, we surface combinations that consistently outperform expectations.

This table highlights the top-performing emotional tone pairings under stress, sorted by average 5-week real gross. It complements our primary-only models by revealing how layered emotional architecture—through secondary tones—shaped both audience resonance and box office impact.

# Filter to only films released during stress periods
combo_summary <- df %>%
  filter(stress_level == "Stress") %>%
  mutate(
    gross = five_week_real_gross,
    imdb = as.numeric(im_db_rating),
    primary = as.factor(primary),
    secondary = as.factor(qc_secondary_classification)
  ) %>%
  group_by(primary, secondary) %>%
  summarise(
    n = n(),
    avg_gross_pairing = mean(gross, na.rm = TRUE),
    avg_imdb = mean(imdb, na.rm = TRUE),
    .groups = "drop"
  )
# Calculate average gross per primary tone
primary_avg <- combo_summary %>%
  group_by(primary) %>%
  summarise(
    avg_gross_primary = mean(avg_gross_pairing, na.rm = TRUE),
    .groups = "drop"
  )
# Join and calculate relative lift
combo_with_lift <- combo_summary %>%
  left_join(primary_avg, by = "primary") %>%
  mutate(
    relative_lift = avg_gross_pairing / avg_gross_primary
  ) %>%
  arrange(desc(relative_lift))
# Display top 5 by relative lift
combo_with_lift %>%
  slice_max(relative_lift, n = 5) %>%
  kable(
    caption = "Top 5 Emotional Tone Pairings by Relative Lift (Stress Periods)",
    booktabs = TRUE,
    digits = 2
      )
Top 5 Emotional Tone Pairings by Relative Lift (Stress Periods)
primary                secondary          n  avg_gross_pairing  avg_imdb  avg_gross_primary  relative_lift
Dystopian Catharsis    Redemptive Grit    2          241964308      6.35           86743551           2.79
Intelligent Rebellion  Heroic Escapism    6           96396068      6.65           41029590           2.35
Dark Empathy           Heroic Escapism   12           99986956      6.78           47270442           2.12
Heroic Escapism        Comfort Nostalgia  6          261126740      6.57          127264344           2.05
Redemptive Grit        Heroic Escapism   10           68332673      6.51           39114134           1.75
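Relative lift is the pairing's average gross divided by the average across all pairings sharing the same primary tone; the top row can be verified directly from the table (and rests on only two films, so it should be read cautiously):

# Dystopian Catharsis + Redemptive Grit, from the values shown above
241964308 / 86743551
# ≈ 2.79, matching the relative_lift column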

Data Outputs for Analysis and Tableau

Below is a summary of all R-generated CSV files used in the final analysis and Tableau visualizations.
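The file list itself is not reproduced here. A minimal export sketch covering the summary objects built above follows; the filenames are placeholders rather than the names used in the actual Tableau workbook.

# Export analysis outputs for Tableau (placeholder filenames)
readr::write_csv(primary_summary, "primary_summary.csv")
readr::write_csv(supply_bias,     "supply_bias_matrix.csv")
readr::write_csv(missed_df,       "missed_opportunities_stress.csv")
readr::write_csv(final_tbl,       "pam_mre_by_tone.csv")
readr::write_csv(volatility_df,   "volatility_under_stress.csv")
readr::write_csv(combo_with_lift, "tone_pairings_relative_lift.csv")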

Methods & Session Info

 ─ Session info ───────────────────────────────────────────────────────────────
 setting  value
 version  R version 4.4.3 (2025-02-28)
 os       macOS Sequoia 15.4.1
 system   aarch64, darwin20
 ui       X11
 language (EN)
 collate  en_US.UTF-8
 ctype    en_US.UTF-8
 tz       America/Los_Angeles
 date     2025-08-14
 pandoc   3.4 @ /Applications/RStudio.app/Contents/Resources/app/quarto/bin/tools/aarch64/ (via rmarkdown)
 quarto   1.6.42 @ /Applications/RStudio.app/Contents/Resources/app/quarto/bin/quarto

─ Packages ───────────────────────────────────────────────────────────────────
 package     * version date (UTC) lib source
 backports     1.5.0   2024-05-23 [1] CRAN (R 4.4.1)
 bit           4.6.0   2025-03-06 [1] CRAN (R 4.4.1)
 bit64         4.6.0-1 2025-01-16 [1] CRAN (R 4.4.1)
 broom       * 1.0.8   2025-03-28 [1] CRAN (R 4.4.1)
 bslib         0.9.0   2025-01-30 [1] CRAN (R 4.4.1)
 cachem        1.1.0   2024-05-16 [1] CRAN (R 4.4.1)
 cli           3.6.4   2025-02-13 [1] CRAN (R 4.4.1)
 crayon        1.5.3   2024-06-20 [1] CRAN (R 4.4.1)
 digest        0.6.37  2024-08-19 [1] CRAN (R 4.4.1)
 dplyr       * 1.1.4   2023-11-17 [1] CRAN (R 4.4.0)
 evaluate      1.0.3   2025-01-10 [1] CRAN (R 4.4.1)
 fastmap       1.2.0   2024-05-15 [1] CRAN (R 4.4.1)
 generics      0.1.3   2022-07-05 [1] CRAN (R 4.4.1)
 glue          1.8.0   2024-09-30 [1] CRAN (R 4.4.1)
 hms           1.1.3   2023-03-21 [1] CRAN (R 4.4.0)
 htmltools     0.5.8.1 2024-04-04 [1] CRAN (R 4.4.1)
 janitor     * 2.2.1   2024-12-22 [1] CRAN (R 4.4.1)
 jquerylib     0.1.4   2021-04-26 [1] CRAN (R 4.4.0)
 jsonlite      2.0.0   2025-03-27 [1] CRAN (R 4.4.1)
 knitr       * 1.50    2025-03-16 [1] CRAN (R 4.4.1)
 lifecycle     1.0.4   2023-11-07 [1] CRAN (R 4.4.1)
 lubridate   * 1.9.4   2024-12-08 [1] CRAN (R 4.4.1)
 magrittr      2.0.3   2022-03-30 [1] CRAN (R 4.4.1)
 pillar        1.10.2  2025-04-05 [1] CRAN (R 4.4.1)
 pkgconfig     2.0.3   2019-09-22 [1] CRAN (R 4.4.1)
 purrr         1.0.4   2025-02-05 [1] CRAN (R 4.4.1)
 R6            2.6.1   2025-02-15 [1] CRAN (R 4.4.1)
 readr       * 2.1.5   2024-01-10 [1] CRAN (R 4.4.0)
 rlang         1.1.5   2025-01-17 [1] CRAN (R 4.4.1)
 rmarkdown     2.29    2024-11-04 [1] CRAN (R 4.4.1)
 rstudioapi    0.17.1  2024-10-22 [1] CRAN (R 4.4.1)
 sass          0.4.9   2024-03-15 [1] CRAN (R 4.4.0)
 sessioninfo   1.2.3   2025-02-05 [1] CRAN (R 4.4.1)
 snakecase     0.11.1  2023-08-27 [1] CRAN (R 4.4.0)
 stringi       1.8.7   2025-03-27 [1] CRAN (R 4.4.1)
 stringr     * 1.5.1   2023-11-14 [1] CRAN (R 4.4.0)
 tibble        3.2.1   2023-03-20 [1] CRAN (R 4.4.0)
 tidyr       * 1.3.1   2024-01-24 [1] CRAN (R 4.4.1)
 tidyselect    1.2.1   2024-03-11 [1] CRAN (R 4.4.0)
 timechange    0.3.0   2024-01-18 [1] CRAN (R 4.4.1)
 tzdb          0.5.0   2025-03-15 [1] CRAN (R 4.4.1)
 vctrs         0.6.5   2023-12-01 [1] CRAN (R 4.4.0)
 vroom         1.6.5   2023-12-05 [1] CRAN (R 4.4.0)
 withr         3.0.2   2024-10-28 [1] CRAN (R 4.4.1)
 xfun          0.52    2025-04-02 [1] CRAN (R 4.4.1)
 yaml          2.3.10  2024-07-26 [1] CRAN (R 4.4.1)

 [1] /Library/Frameworks/R.framework/Versions/4.4-arm64/Resources/library
 * ── Packages attached to the search path.

──────────────────────────────────────────────────────────────────────────────