Random effect: Should I stay or should I go?



One of the more common questions I get about mixed models is whether there are any standards regarding the removal of random effects from the model. When should a random effect be included in the model? This was also one of the questions we had hoped to answer for our field (psycholinguistics) in the pre-CUNY Workshop on Ordinary and Multilevel Models (WOMM), but I don’t think we got anywhere close to a “standard” (see Harald Baayen’s presentation on understanding random effect correlations, though, for a very insightful discussion).

That being said, I find most of us would probably agree on a set of rules of thumb, at least for factorial analyses of balanced data:

  • for balanced data sets, start with fully crossed and fully specified random effects, e.g. for y ~ a * b use lmer(y ~ a * b + (1 + a * b | subject) + (1 + a * b | item), data)
  • if that does not converge because any of the to-be-estimated random effect variances are effectively zero, then simplify, e.g.
    • lmer(y ~ a * b + (1 + a + b | subject) + (1 + a + b | item), data)
    • lmer(y ~ a * b + (1 + a * b | subject) + (1 + a | item), data) or lmer(y ~ a * b + (1 + a * b | subject) + (1 + b | item), data)
    • lmer(y ~ a * b + (1 + a * b | subject) + (1 | item), data)
    • etc. I usually reduce the item effects first, because item variances usually seem to be much smaller than subject variances in researcher-designed experiments (though not in corpus or dialogue studies).
    • at some point one of these simpler models will converge
  • check the correlations between the random effects (see Baayen’s WOMM presentation, available on the blog [google: hlp lab womm, linked to schedule]). If there are high correlations, check whether you can remove further random effect terms (following the hierarchy principle). Use the procedures outlined in, e.g., Baayen, Davidson, & Bates (2008, JML) or Baayen (2008, book) for random effect model comparisons. I use REML-fitted models when testing whether removal of a random term significantly worsens a linear mixed effects model (because REML is less biased than ML in estimating variances), though the parameter estimates are usually very similar under both estimation methods anyway.
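As a sketch of the model-comparison step above (assuming the lme4 package and a hypothetical data frame d with columns y, a, b, subject, and item; the function names here reflect current versions of lme4, which postdate this post):

```r
# Sketch: compare two REML fits that differ only in the item random slopes.
library(lme4)

m_full <- lmer(y ~ a * b + (1 + a * b | subject) + (1 + a | item),
               data = d, REML = TRUE)
m_red  <- lmer(y ~ a * b + (1 + a * b | subject) + (1 | item),
               data = d, REML = TRUE)

# Likelihood-ratio test of the extra random slope; refit = FALSE keeps the
# REML fits (by default, anova() would refit both models with ML first).
anova(m_red, m_full, refit = FALSE)
```

If the test is non-significant, the simpler random effect structure is retained and the procedure is repeated for the next candidate term.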

I would call the resulting random effect structure the “maximal random effect structure justified by model comparison/supported by the data” (given the random effects considered, e.g. subjects and items).

The function aovlmer.fnc() in Baayen’s languageR package for R allows comparisons of models that differ only in their random effects. I also expect functions that automate this process somewhat to appear pretty soon.

As always, updates, comments, and questions are welcome.


5 thoughts on “Random effect: Should I stay or should I go?”

    tiflo said:
    February 21, 2011 at 11:36 pm

    For anyone who’s interested in these issues: there was some discussion on R-lang that is highly relevant to the above issue. I outlined a more detailed strategy as to how to proceed in building an appropriate random effect structure for a simple mixed effect model: https://mailman.ucsd.edu/pipermail/ling-r-lang-l/2011-February/000225.html

    See also the follow-up clarifications and questions linked to that post.


    […] might be helpful to some. I also took it as an opportunity to update the procedure I described at https://hlplab.wordpress.com/2009/05/14/random-effect-structure/. As always, comments are welcome. What I am writing below are just suggestions. […]


    Mixed Effect Models | Social by Selection said:
    July 23, 2015 at 5:02 pm

    […] main effect goes nonsignificant when adding random effects Building random effects: One, Jaeger1, Jaeger2 On choosing fixed or random for variables: Bell, Littel, Statsxchange, AFactor Examples […]


    Dave said:
    May 4, 2016 at 3:48 pm

    What is the “hierarchy principle”?


      tiflo responded:
      May 5, 2016 at 8:20 am

      Here it refers to removing more complex terms from the model before removing their components.
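      In lmer terms (hypothetical formulas, with a made-up data frame d), that ordering looks like this:

      ```r
      # Hierarchy principle: remove the a:b random slope before its components.
      m1 <- lmer(y ~ a * b + (1 + a * b | subject) + (1 | item), data = d)  # full
      m2 <- lmer(y ~ a * b + (1 + a + b | subject) + (1 | item), data = d)  # drop a:b slope first
      m3 <- lmer(y ~ a * b + (1 + a | subject) + (1 | item), data = d)      # only then drop a main-effect slope
      ```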

