### Random effect: Should I stay or should I go?


One of the more common questions I get about mixed models is whether there are any standards regarding the removal of random effects from the model. When should a random effect be included in the model? This was also one of the questions we had hoped to answer for our field (psycholinguistics) at the pre-CUNY Workshop on Ordinary and Multilevel Models (WOMM), but I don’t think we got anywhere close to a “standard” (though see Harald Baayen’s presentation on understanding random effect correlations for a very insightful discussion).

That being said, I find most of us would probably agree on a set of rules of thumb, at least for factorial analyses of balanced data:

• for balanced data sets, start with fully crossed and fully specified random effects, e.g. for y ~ a*b have lmer(y ~ a * b + (1 + a * b | subject) + (1 + a * b | item), data)
• if that does not converge because any of the to-be-estimated variances of the random effects are effectively zero, then simplify, e.g.
• lmer(y ~ a * b + (1 + a + b | subject) + (1 + a + b | item), data)
• lmer(y ~ a * b + (1 + a * b | subject) + (1 + a | item), data) or lmer(y ~ a * b + (1 + a * b | subject) + (1 + b | item), data)
• lmer(y ~ a * b + (1 + a * b | subject) + (1 | item), data)
• etc. I usually reduce the item effects first, because (at least in researcher-made experiments, though not in corpus or dialogue studies) item variances usually seem to be much smaller than subject variances.
• at some point this will converge
• check the correlations between random effects (see Baayen’s WOMM presentation, available on the blog; google: hlp lab womm, linked to the schedule). If there are high correlations, check whether you can remove further random effect terms (following the hierarchy principle). Use the procedures outlined in, e.g., Baayen, Davidson & Bates, 2008 (JML) or Baayen, 2008 (book) for random effect model comparisons. I use REML-fitted models if I want to test whether the removal of a random term is significant for a linear mixed effect model (because REML is less biased than ML in estimating variances), though the parameter estimates are usually very similar for both estimation methods anyway.
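The steps above can be sketched in lme4 as follows. This is a minimal sketch, assuming a hypothetical data frame `d` with a continuous outcome `y`, factors `a` and `b`, and grouping factors `subject` and `item` (all names illustrative, not from a real data set):

```r
library(lme4)

# Full model: maximal random effect structure over the hypothetical data `d`
m.full <- lmer(y ~ a * b + (1 + a * b | subject) + (1 + a * b | item),
               data = d, REML = TRUE)

# Simplified model: drop the interaction term from both random effect blocks
m.simpler <- lmer(y ~ a * b + (1 + a + b | subject) + (1 + a + b | item),
                  data = d, REML = TRUE)

# Compare the two REML fits; the fixed effects are identical, so the
# likelihood-ratio test targets only the random effect terms. In current
# lme4, refit = FALSE keeps the REML fits rather than refitting with ML.
anova(m.full, m.simpler, refit = FALSE)

# Inspect variances and correlations of the retained random effects
VarCorr(m.simpler)
```

If the simpler model does not fit significantly worse, keep it and continue removing terms (interactions before their components) until further removal hurts the fit.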

I would call the resulting random effect structure the “maximal random effect structure justified by model comparison/supported by the data” (given the random effects considered, e.g., subjects and items).

The function aovlmer.fnc() in Baayen’s languageR library (for R) allows comparisons of models that differ only in terms of random effects. I also expect there to be functions pretty soon that automate this process somewhat.

## 8 thoughts on “Random effect: Should I stay or should I go?”

tiflo said:
February 21, 2011 at 11:36 pm

For anyone who’s interested in these issues: there was some discussion on R-lang that is highly relevant to the above issue. I outlined a more detailed strategy as to how to proceed in building an appropriate random effect structure for a simple mixed effect model: https://mailman.ucsd.edu/pipermail/ling-r-lang-l/2011-February/000225.html


[…] might be helpful to some. I also took it as an opportunity to update the procedure I described at https://hlplab.wordpress.com/2009/05/14/random-effect-structure/. As always, comments are welcome. What I am writing below are just suggestions. […]


Mixed Effect Models | Social by Selection said:
July 23, 2015 at 5:02 pm

[…] main effect goes nonsignificant when adding random effects Building random effects: One, Jaeger1, Jaeger2 On choosing fixed or random for variables: Bell, Littel, Statsxchange, AFactor Examples […]


Dave said:
May 4, 2016 at 3:48 pm

What is the “hierarchy principle”?


tiflo responded:
May 5, 2016 at 8:20 am

Here it refers to removing more complex terms from the model before removing their components, e.g., dropping the a:b random slope before dropping the random slopes for a or b.


Berna said:
May 11, 2017 at 11:53 pm

Thank you for this post. I read Barr et al.’s paper (2013) “Keep it maximal” that suggests that adding random slopes which cannot be estimated is unnecessary. For instance, if a subject is only in one condition of an experimental design, then adding (1+Condition|Subject) is not needed because it cannot be estimated. What are your thoughts on that?

I am also not sure about how to construct the random effects structure when including participant characteristics (such as age, cognitive measures) in the model where subject and item are random effects.

Thank you!


tiflo responded:
May 12, 2017 at 9:10 am

Variables that vary only between the levels of a grouping factor (like the between-subject condition you mention) indeed do not require random slopes by that grouping variable. Unless your age or cognitive measures vary within subjects (e.g., because it’s a longitudinal study), no by-subject random slopes should be included for them. That’s parallel to the logic of ANOVA for within- vs. between-subject or -item factors.

But even if they vary within subjects, you might not have enough data to include by-subject random slopes (see a related discussion in my 2011 Linguistic Typology paper). I’d also read the recent 2015 paper by Bates et al., which does an excellent job of providing a counterpoint to Barr et al.
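A minimal sketch of that contrast, assuming a hypothetical data frame `d` where `cond` is a between-subject factor and `age` a between-subject participant characteristic (names are illustrative):

```r
library(lme4)

# `cond` does not vary within subjects, so a by-subject slope for it
# cannot be estimated and is omitted. Items, in contrast, typically see
# all levels of `cond` and the full age range, so by-item slopes for
# these variables are in principle estimable (data permitting):
m <- lmer(y ~ cond + age + (1 | subject) + (1 + cond + age | item),
          data = d)
```

Whether the by-item slopes can actually be supported depends on the amount of data; if the fit does not converge, simplify the by-item terms as described in the post.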


Berna said:
May 12, 2017 at 9:36 am

Thank you for your quick response. I actually encountered the paper you mentioned today and will read it. Indeed, especially with logit models, my experience is that models with many random effects are difficult to fit because they often fail to converge.

Thank you very much.
Regards,
