Install packages if they aren’t already installed (note the compact pattern here, which avoids re-installing a package that is already present in your R library).
if (!("knitr" %in% installed.packages())) install.packages("knitr")
if (!("tidyverse" %in% installed.packages())) install.packages("tidyverse")
if (!("here" %in% installed.packages())) install.packages("here")
if (!("brms" %in% installed.packages())) install.packages("brms")
if (!("broom.mixed" %in% installed.packages())) install.packages("broom.mixed")
if (!("tidybayes" %in% installed.packages())) install.packages("tidybayes")
if (!("bayesplot" %in% installed.packages())) install.packages("bayesplot")
if (!("modelr" %in% installed.packages())) install.packages("modelr")
Load libraries and set up paths
library(knitr)
library(brms)
library(tidyverse)
library(here)
library(tidybayes)
library(broom.mixed)
library(bayesplot)
library(modelr)
data_path <- here("data")
An elegant aspect of R’s infrastructure is that we can also easily adapt our analyses to a Bayesian regression approach. Here we’ll use the “brms” package to fit a Bayesian linear mixed-effects model to the same data we used in the introduction to linear mixed-effects models. Check out the brms documentation for many more vignettes and examples here: https://paulbuerkner.com/brms/index.html
Load the data and name the object “rt_data”.
rt_data <- read_csv(file.path(data_path, "rt_dummy_data.csv")) #data_path is already an absolute path from here(), so join it with file.path
## Rows: 21679 Columns: 4
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## chr (2): modality, stim
## dbl (2): PID, RT
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
rt_data <- rt_data %>%
  mutate(
    #modality_c = ifelse(modality=="Audio-only",-0.5,0.5),
    log_rt = log(RT) #log-transform reaction times to reduce right skew
  )
Let’s first fit a Bayesian regression model predicting log reaction time from modality, accounting for the nested structure of the data and leaving all of the model parameters at their defaults.
Notice that the core model syntax is exactly the same as for lme4.
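For reference, the corresponding lme4 call would look like this (a sketch for comparison only, assuming lme4 is installed; not run here):
#equivalent frequentist fit in lme4, for comparison (not run)
#library(lme4)
#rt_mod_log_lmer <- lmer(log_rt ~ 1 + modality + (1 + modality|PID), data = rt_data)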
#this takes a while to fit; caching the fit (e.g., via knitr's cache=TRUE chunk option or brm's file argument) saves time on future runs
rt_mod_log <- brm(log_rt ~ 1 + modality +
(1 + modality|PID),
data = rt_data)
## (C compilation messages and Stan sampling progress omitted. Four chains ran 2000 iterations each (1000 warmup + 1000 sampling); each chain took roughly 160 seconds.)
Inspect the summary output.
summary(rt_mod_log)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: log_rt ~ 1 + modality + (1 + modality | PID)
## Data: rt_data (Number of observations: 21679)
## Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup draws = 4000
##
## Multilevel Hyperparameters:
## ~PID (Number of levels: 53)
## Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept) 0.16 0.02 0.14 0.20 1.01
## sd(modalityAudiovisual) 0.08 0.01 0.06 0.10 1.00
## cor(Intercept,modalityAudiovisual) -0.27 0.13 -0.53 0.00 1.00
## Bulk_ESS Tail_ESS
## sd(Intercept) 557 972
## sd(modalityAudiovisual) 1123 2058
## cor(Intercept,modalityAudiovisual) 1153 1625
##
## Regression Coefficients:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 6.91 0.02 6.86 6.95 1.03 228 601
## modalityAudiovisual 0.08 0.01 0.06 0.10 1.01 632 1526
##
## Further Distributional Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 0.25 0.00 0.24 0.25 1.00 10132 2755
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Summarize specific coefficients using tidybayes.
rt_mod_log %>%
spread_draws(b_modalityAudiovisual) %>%
median_qi()
## # A tibble: 1 × 6
## b_modalityAudiovisual .lower .upper .width .point .interval
## <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 0.0798 0.0566 0.103 0.95 median qi
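broom.mixed (loaded above) offers a similar one-line summary; a sketch, since its tidy() method also supports brmsfit objects:
#alternative: a tidy data frame of the fixed effects with credible intervals
tidy(rt_mod_log, effects = "fixed", conf.int = TRUE)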
Check the posterior predictive distribution, i.e. compare the observed data (y) to simulated data from the posterior predictive distribution (y_rep). If the model fits the data well, then data we generate (simulate) from the model should look similar to the raw data observed.
pp_check(rt_mod_log)
## Using 10 posterior draws for ppc type 'dens_overlay' by default.
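pp_check() supports other diagnostics via its type argument; for example, a sketch comparing the observed mean to the distribution of means across posterior predictive draws:
#compare the observed mean log RT to the means of posterior predictive draws
pp_check(rt_mod_log, type = "stat", stat = "mean")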
#plot(rt_mod_log) #uncomment for trace and density plots of each parameter
#plot estimated marginal means for each condition
plot(conditional_effects(rt_mod_log))
posterior <- as.matrix(rt_mod_log)
bayesplot::mcmc_areas(posterior,
pars = c("b_modalityAudiovisual"),
# arbitrary threshold for shading probability mass
prob = 0.83)
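The shaded region above is arbitrary; we can also compute posterior mass directly from the draws, e.g., the proportion of draws in which the audiovisual effect is positive:
#posterior probability that the audiovisual effect is greater than zero
mean(posterior[, "b_modalityAudiovisual"] > 0)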
#get posterior expectations for each condition; re_formula = NA ignores the random effects
post_data <- rt_data %>%
  data_grid(modality) %>%
  add_epred_draws(rt_mod_log, ndraws = 4000, re_formula = NA)
#aggregate participant data
rt_data_agg <- rt_data %>%
group_by(PID, modality) %>%
summarise(avg_rt = mean(log_rt), .groups = "drop")
post_data %>%
ggplot(aes(x=modality,y=.epred,fill=modality))+
#geom_jitter(data=rt_data_agg,aes(y=avg_rt,color=modality),position = position_nudge(x = -0.1),alpha=0.5,stroke=NA)+
stat_halfeye()+
ylab("Log Reaction Time (ms)")+
xlab("Modality")+
theme_minimal()+
theme(legend.position="none")
Bayesian models are extremely flexible, but that flexibility means we also need to specify many aspects of our model explicitly. Two key features to specify are the model family and the priors. Can you figure out what family was used in the example above?
#extract the family
family(rt_mod_log)
##
## Family: gaussian
## Link function: identity
#extract the priors
prior_summary(rt_mod_log)
## prior class coef group resp dpar nlpar lb ub
## (flat) b
## (flat) b modalityAudiovisual
## student_t(3, 7, 2.5) Intercept
## lkj_corr_cholesky(1) L
## lkj_corr_cholesky(1) L PID
## student_t(3, 0, 2.5) sd 0
## student_t(3, 0, 2.5) sd PID 0
## student_t(3, 0, 2.5) sd Intercept PID 0
## student_t(3, 0, 2.5) sd modalityAudiovisual PID 0
## student_t(3, 0, 2.5) sigma 0
## source
## default
## (vectorized)
## default
## default
## (vectorized)
## default
## (vectorized)
## (vectorized)
## (vectorized)
## default
The model family is specified with the family argument. Here we’ll set the Gaussian family (i.e., a normal distribution) explicitly; it is brms’s default family for continuous outcomes.
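As an illustration of swapping families, one could skip the manual log transform and model raw RT with a lognormal family instead (a sketch, not run here):
#alternative sketch: model raw RT directly with a lognormal family (not run)
#rt_mod_lognormal <- brm(RT ~ 1 + modality + (1 + modality|PID),
#                        data = rt_data, family = lognormal())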
Next, we show an example of how to set priors on the model parameters (coefficients, SDs, and correlations) explicitly.
#set the priors
priors <- c(
  set_prior("normal(6.9, 1)", class = "Intercept"), #normal prior for the intercept, centered near log(1000) ms (just for illustration)
  set_prior("normal(0, 1)", class = "b"), #normal prior for fixed-effect coefficients
  set_prior("normal(0, 0.5)", class = "sd"), #(half-)normal prior for random-effect SDs
  set_prior("lkj(2)", class = "L")) #LKJ prior for the random-effect correlation matrix
#this takes a while to fit; caching the fit (e.g., via knitr's cache=TRUE chunk option or brm's file argument) saves time on future runs
rt_mod_log_w_priors <- brm(log_rt ~ 1 + modality +
                             (1 + modality|PID),
                           data = rt_data,
                           prior = priors, #our specified priors
                           family = gaussian() #explicit; gaussian(link = "log") would be another option
                           )
## (C compilation messages and Stan sampling progress omitted. Four chains ran 2000 iterations each (1000 warmup + 1000 sampling); chains took roughly 140-185 seconds each.)
summary(rt_mod_log_w_priors)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: log_rt ~ 1 + modality + (1 + modality | PID)
## Data: rt_data (Number of observations: 21679)
## Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup draws = 4000
##
## Multilevel Hyperparameters:
## ~PID (Number of levels: 53)
## Estimate Est.Error l-95% CI u-95% CI Rhat
## sd(Intercept) 0.16 0.02 0.14 0.20 1.00
## sd(modalityAudiovisual) 0.08 0.01 0.06 0.10 1.00
## cor(Intercept,modalityAudiovisual) -0.27 0.13 -0.51 -0.00 1.00
## Bulk_ESS Tail_ESS
## sd(Intercept) 662 1245
## sd(modalityAudiovisual) 906 1490
## cor(Intercept,modalityAudiovisual) 1202 1872
##
## Regression Coefficients:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 6.91 0.02 6.87 6.96 1.02 278 454
## modalityAudiovisual 0.08 0.01 0.06 0.10 1.01 833 1560
##
## Further Distributional Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 0.25 0.00 0.24 0.25 1.00 10094 2982
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
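The estimates are essentially unchanged under these weakly informative priors. To compare the two fits on out-of-sample predictive performance, brms also supports approximate leave-one-out cross-validation (a sketch; this can be slow with 21,679 observations):
#approximate leave-one-out cross-validation comparing the two fits (slow)
loo(rt_mod_log, rt_mod_log_w_priors)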
post_data_w_priors <- rt_data %>%
data_grid(modality) %>%
add_epred_draws(rt_mod_log_w_priors, ndraws = 4000, re_formula = NA)
post_data_w_priors %>%
ggplot(aes(x=modality,y=.epred,fill=modality))+
stat_halfeye()+
ylab("Log Reaction Time (ms)")+
xlab("Modality")+
theme_minimal()+
theme(legend.position="none")
sessionInfo()
## R version 4.3.2 (2023-10-31)
## Platform: aarch64-apple-darwin20 (64-bit)
## Running under: macOS Sonoma 14.7.2
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.3-arm64/Resources/lib/libRlapack.dylib; LAPACK version 3.11.0
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## time zone: America/Los_Angeles
## tzcode source: internal
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] modelr_0.1.11 bayesplot_1.11.1 broom.mixed_0.2.9.6
## [4] tidybayes_3.0.6 here_1.0.1 lubridate_1.9.3
## [7] forcats_1.0.0 stringr_1.5.1 dplyr_1.1.4
## [10] purrr_1.0.2 readr_2.1.5 tidyr_1.3.1
## [13] tibble_3.2.1 ggplot2_3.5.1 tidyverse_2.0.0
## [16] brms_2.21.0 Rcpp_1.0.12 knitr_1.45
##
## loaded via a namespace (and not attached):
## [1] gridExtra_2.3 inline_0.3.19 sandwich_3.1-0
## [4] rlang_1.1.3 magrittr_2.0.3 multcomp_1.4-25
## [7] furrr_0.3.1 ggridges_0.5.6 matrixStats_1.3.0
## [10] compiler_4.3.2 loo_2.7.0 callr_3.7.6
## [13] vctrs_0.6.5 reshape2_1.4.4 pkgconfig_2.0.3
## [16] arrayhelpers_1.1-0 crayon_1.5.2 fastmap_1.1.1
## [19] backports_1.4.1 labeling_0.4.3 utf8_1.2.4
## [22] rmarkdown_2.26 tzdb_0.4.0 ps_1.7.6
## [25] bit_4.0.5 xfun_0.43 cachem_1.0.8
## [28] jsonlite_1.8.8 highr_0.10 broom_1.0.5
## [31] parallel_4.3.2 R6_2.5.1 bslib_0.7.0
## [34] stringi_1.8.3 StanHeaders_2.32.6 parallelly_1.37.1
## [37] jquerylib_0.1.4 estimability_1.5 rstan_2.32.6
## [40] zoo_1.8-12 Matrix_1.6-5 splines_4.3.2
## [43] timechange_0.3.0 tidyselect_1.2.1 rstudioapi_0.16.0
## [46] abind_1.4-5 yaml_2.3.8 codetools_0.2-20
## [49] curl_5.2.1 processx_3.8.4 listenv_0.9.1
## [52] pkgbuild_1.4.4 plyr_1.8.9 lattice_0.22-6
## [55] withr_3.0.0 bridgesampling_1.1-2 posterior_1.5.0
## [58] coda_0.19-4.1 evaluate_0.23 future_1.33.2
## [61] survival_3.5-8 RcppParallel_5.1.7 ggdist_3.3.2
## [64] pillar_1.9.0 tensorA_0.36.2.1 checkmate_2.3.1
## [67] stats4_4.3.2 distributional_0.4.0 generics_0.1.3
## [70] vroom_1.6.5 rprojroot_2.0.4 hms_1.1.3
## [73] rstantools_2.4.0 munsell_0.5.1 scales_1.3.0
## [76] globals_0.16.3 xtable_1.8-4 glue_1.7.0
## [79] emmeans_1.10.1 tools_4.3.2 mvtnorm_1.2-4
## [82] grid_4.3.2 QuickJSR_1.1.3 colorspace_2.1-0
## [85] nlme_3.1-164 cli_3.6.2 fansi_1.0.6
## [88] svUnit_1.0.6 Brobdingnag_1.2-9 V8_4.4.2
## [91] gtable_0.3.4 sass_0.4.9 digest_0.6.35
## [94] TH.data_1.1-2 farver_2.1.1 htmltools_0.5.8.1
## [97] lifecycle_1.0.4 bit64_4.0.5 MASS_7.3-60.0.1