Two interesting papers on mixed models

While searching for something else, I just came across two papers that should be of interest to folks working with mixed models.

  • Schielzeth, H. and Forstmeier, W. 2009. Conclusions beyond support: overconfident estimates in mixed models. Behavioral Ecology 20(2), 416-420. I have seen the same point being made in several papers under review and at a recent CUNY conference (e.g. Doug Roland’s 2009? CUNY poster). On the one hand, it should be absolutely clear that random intercepts alone are often insufficient to account for violations of independence (a point I make every time I teach a tutorial). On the other hand, I have reviewed quite a number of papers where this mistake was made. So, here you go, in black and white. The moral is (once again) that no statistical procedure does what you think it does unless you use it the way it was intended.
  • The second paper takes on a more advanced issue, but one that is becoming more and more relevant: how can we test whether a random effect is essentially unnecessary, i.e. that it has a variance of 0? Currently, most people conduct model comparison (following Baayen, Davidson and Bates, 2008; see the short R sketch after this list). But this approach is not recommended (nor do Baayen et al. recommend it) if we want to test whether all random effects can be completely removed from the model (cf. the very useful R FAQ list, which states “do not compare lmer models with the corresponding lm fits, or glmer/glm; the log-likelihoods […] include different additive terms”). This issue is taken on in Scheipl, F., Greven, S. and Küchenhoff, H. 2008. Size and power of tests for a zero random effect variance or polynomial regression in additive and linear mixed models. Computational Statistics & Data Analysis 52(7), 3283-3299. They present power comparisons of various tests.
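For concreteness, here is a minimal sketch of that standard model comparison in lme4, with hypothetical names (a data frame d with columns rt, condition, subject, and item); it contrasts what is usually done (comparing nested mixed models) with what the R FAQ warns against (comparing a mixed model against an ordinary lm fit):

library(lme4)

## Full model and a reduced model that drops the by-item random intercept
## (both fit by ML so that the likelihood-ratio comparison is meaningful).
m.full   <- lmer(rt ~ condition + (1 | subject) + (1 | item), data = d, REML = FALSE)
m.noitem <- lmer(rt ~ condition + (1 | subject), data = d, REML = FALSE)

## Likelihood-ratio test between two *mixed* models. Even here the chi-square
## reference distribution is conservative, because the tested variance lies
## on the boundary of the parameter space.
anova(m.noitem, m.full)

## What the FAQ warns against: comparing m.noitem directly against
## lm(rt ~ condition, data = d), because the reported log-likelihoods
## include different additive terms.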

3 thoughts on “Two interesting papers on mixed models”

    […] 5) Note that, to the best of my knowledge, it’s *not* legit to test whether you might not need any random effect by comparing e.g. (1 | subject) against an ordinary linear model. See, for example, the link provided on https://hlplab.wordpress.com/2011/05/31/two-interesting-papers-on-mixed-models/. […]


    Edward Flemming said:
    July 6, 2011 at 10:41 am

    Hi Florian,

    Pinheiro and Bates (2000) compare lme and lm models using anova. The issue referred to in the R FAQ seems to be making sure that the log-likelihoods are computed in comparable ways across the two models; according to this discussion thread, the anova method from lme ensures this, and I assume the same applies to lme4:
    http://r.789695.n4.nabble.com/lmm-WITHOUT-random-factor-lme4-td3384054.html
    The paper you link to is about a different issue: there is a problem with assuming that the likelihood ratio statistic follows a chi-square distribution when ‘the tested parameter is on the boundary of the parameter space’ as in a test for zero variance of a random effect (even for one random effect, not just testing whether all are zero). For small data sets, a parametric bootstrap likelihood ratio test as described in Faraway’s ‘Extending the Linear Model with R’ p.160 is really easy to implement with simulate() and refit() from lme4 (see documentation for simulate-mer for sample code).
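    Roughly, the bootstrap looks like the sketch below (with a hypothetical data frame d and columns rt, condition, and subject, and an arbitrary 1000 simulations):

    library(lme4)

    ## Null model without the random effect, and the mixed model under test
    ## (fit by ML so the likelihoods are on the scale used below).
    m0 <- lm(rt ~ condition, data = d)
    m1 <- lmer(rt ~ condition + (1 | subject), data = d, REML = FALSE)

    ## Observed likelihood-ratio statistic.
    lrt.obs <- as.numeric(2 * (logLik(m1) - logLik(m0)))

    ## Parametric bootstrap: simulate responses under the null, refit both
    ## models, and collect the simulated likelihood-ratio statistics.
    nsim <- 1000
    lrt.sim <- numeric(nsim)
    for (i in seq_len(nsim)) {
      d$y.sim <- unlist(simulate(m0))
      lrt.sim[i] <- as.numeric(2 * (logLik(refit(m1, d$y.sim)) -
                                    logLik(lm(y.sim ~ condition, data = d))))
    }

    ## Bootstrap p-value for the test that the by-subject variance is zero.
    mean(lrt.sim >= lrt.obs)

    As far as I can tell, any constant terms in the two log-likelihoods enter the observed and the simulated statistics in the same way, so the bootstrap reference distribution sidesteps the commensurability worry as well as the boundary problem.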

    best,
    Edward


      tiflo said:
      July 6, 2011 at 9:34 pm

      Hi Edward,

      Thanks for posting this. The link is very useful; I hadn’t been aware of this discussion. I am a bit skeptical, though, since Bates never replied to this and since the R FAQ explicitly says that the likelihoods are NOT commensurate. I was aware that anova does the right thing in terms of using method = “ML” for the comparison of two lme models, but I didn’t think it fixed the problem with the likelihood terms. I am too busy right now, but I will look into this some more.

      The other paper is relevant in that it provides a different way to assess whether the variance of a random effect is zero (in which case it is not required). The alternative method is provided for exactly the reason you mention. I think our community currently accepts model comparison as a sufficiently good way to assess whether a random effect is required (given the size of our data sets). But as this paper shows, this is an issue of ongoing research. That’s why I linked the paper. Notice that the method they propose provides a way to test whether any random effects are required, which is what my post was about.

      Florian

