If you are in the Montreal area, consider joining us for a Workshop on Ordinary and Multilevel Models to be held 5/3-4 at McGill, organized by Michael Wagner, Aparna Nadig, and Kris Onishi. The workshop will include the usual intros to linear regression, linear mixed models, logistic regression, and mixed logit models. We will also discuss common issues in regression modeling and their solutions. Additionally, we will have a couple of special area lectures/tutorials:
- Maureen Gillespie (Northeastern) will talk about different ways to code your variables and how that relates to the specific hypotheses you’re testing.
- Peter Graff (MIT) will give a tutorial on using logistic regression to test linguistic theories. In all likelihood, he will also sing, which relates to the previous post, because he likes to sing about OT.
So, join us! I think there will also be a party =). Below is the full invitation (some details may change).
Below, I’ve posted some code that
- generates an artificial data set
- creates both treatment (a.k.a. dummy) and sum (a.k.a. contrast or ANOVA-style) coding for the data set
- compares the lmer output for the two coding systems
- suggests a way to test simple effects in a linear mixed model
Mostly, though, the code is just meant as a starting point for people who want to play with a balanced (non-Latin-square design) data set and understand the consequences of coding their predictor variables in different ways.
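The code in the post is written in R (for lmer), but the core contrast between treatment and sum coding is language-independent. Here is a minimal sketch in Python using plain least squares on an artificial two-condition data set (the variable names and simulated effect sizes are my own, not from the original code): under treatment coding the intercept estimates the baseline condition's mean, while under sum coding it estimates the grand mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Artificial balanced data set: one two-level factor (condition A vs. B)
n = 100
cond = np.repeat([0, 1], n)  # 0 = condition A, 1 = condition B
y = np.where(cond == 0, 2.0, 3.0) + rng.normal(0, 0.5, 2 * n)

def ols(x):
    """OLS fit of y on an intercept plus one predictor column."""
    X = np.column_stack([np.ones_like(x, dtype=float), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, slope]

# Treatment (a.k.a. dummy) coding: A = 0, B = 1
b_treat = ols(cond.astype(float))
# Sum (a.k.a. contrast / ANOVA-style) coding: A = -1, B = +1
b_sum = ols(np.where(cond == 0, -1.0, 1.0))

# Treatment coding: intercept = mean of A, slope = mean(B) - mean(A).
# Sum coding: intercept = grand mean, slope = half the B - A difference.
print(b_treat)
print(b_sum)
```

With a balanced design these identities hold exactly, which is what makes a balanced data set a good sandbox for seeing how the coding scheme changes the meaning of each coefficient (and, in factorial designs, of the "main effects") without changing the model's fit.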
While Victor Kuperman and I are preparing our slides for WOMM, I've been thinking about how to visualize the process from input variables to a full model. Many of the steps depend heavily on the type of regression model, which in turn depends on the type of outcome (dependent) variable, but there are a number of steps one always needs to go through to obtain interpretable coefficient estimates (as well as unbiased standard error estimates for those coefficients).
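The post doesn't enumerate those steps here, but one that applies regardless of model type is centering continuous predictors so that the intercept refers to a meaningful point in the data. A small illustration (my own example, not from the slides), again with plain least squares:

```python
import numpy as np

rng = np.random.default_rng(1)

# A continuous predictor (say, log word frequency) and a continuous outcome
x = rng.normal(5.0, 2.0, 200)
y = 1.0 + 0.5 * x + rng.normal(0, 0.3, 200)

def ols(pred):
    """OLS fit of y on an intercept plus one predictor column."""
    X = np.column_stack([np.ones_like(pred), pred])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, slope]

b_raw = ols(x)              # intercept = predicted y at x = 0 (often meaningless)
b_centered = ols(x - x.mean())  # intercept = predicted y at the mean of x

# Centering changes only the intercept's interpretation; the slope is unchanged.
print(b_raw)
print(b_centered)
```

The slope is identical under both parameterizations; only the intercept (and, in models with interactions, the lower-order terms) changes its meaning, which is exactly why these preprocessing choices matter for interpretability.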