HLP Lab is grinking 2014

And it’s that time of the year again. Time to take stock. This last year has seen an unusual amount of coming and going. It’s been great to have so many interesting folks visit or spend time in the lab.

Grads

  • Masha Fedzechkina defended her thesis, investigating what artificial language learning can tell us about the source of (some) language universals. She started her post-doc at UPenn, where she’s working with John Trueswell and Lila Gleitman. See this earlier post.
  • Ting Qian successfully defended his thesis on learning in a (subjectively) non-stationary world (primarily advised by Dick Aslin and including some joint work with me). His thesis contained such delicious and ingenious contraptions as the Hibachi Grill Process, a generalization of the Chinese Restaurant Process, based on the insight that the order of stimuli often contains information about the structure of the world, so that a rational observer should take this information into account (unlike basically all standard Bayesian models of learning); for reference, a minimal sketch of the standard CRP follows this list. Check out his site for links to papers under review. Ting’s off to start his post-doc with Joe Austerweil at Brown University.
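
For readers unfamiliar with the Chinese Restaurant Process that the Hibachi Grill Process generalizes, here is a minimal R sketch of the textbook CRP, just as a point of reference. This is not Ting’s model (the HGP itself is described in his papers); it is the exchangeable baseline that ignores stimulus order.

# Standard (exchangeable) CRP sampler: each customer joins an existing table
# with probability proportional to its size, or opens a new table with
# probability proportional to alpha. Unlike the Hibachi Grill Process, this
# baseline is insensitive to the order in which stimuli arrive.
crp_sample <- function(n, alpha = 1) {
  tables <- integer(0)          # current table sizes
  assignments <- integer(n)     # table index assigned to each customer
  for (i in seq_len(n)) {
    probs <- c(tables, alpha)
    k <- sample(length(probs), 1, prob = probs)
    if (k > length(tables)) tables <- c(tables, 1L) else tables[k] <- tables[k] + 1L
    assignments[i] <- k
  }
  assignments
}
set.seed(1)
table(crp_sample(100, alpha = 2))   # cluster sizes from one draw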

Speech recognition: Recognizing the familiar, generalizing to the similar, and adapting to the novel

At long last, we have finished a substantial revision of Dave Kleinschmidt‘s opus “Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel“. It’s still under review, but we’re excited about it and wanted to share what we have right now.

[Figure 1: Modeling changes in phonetic classification as belief updating. After repeatedly hearing a VOT that is ambiguous between /b/ and /p/ but occurs in a word where it can only be a /b/, listeners change their classification of that sound, calling it a /b/ much more often. We model this as a belief updating process, where listeners track the underlying distribution of cues associated with the /b/ and /p/ categories (or the generative model), and then use their beliefs about those distributions to guide classification behavior later on.]

The paper builds on a large body of research on speech perception and adaptation, as well as on distributional learning in other domains, to develop a normative framework for how we manage to understand each other despite the infamous lack of invariance. At the core of the proposal stands the (old, but often under-appreciated) idea that variability in the speech signal is often structured (i.e., conditioned on other variables in the world) and that an ideal observer should take advantage of that structure. This makes speech perception a problem of inference under uncertainty at multiple different levels.
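
To make the figure a bit more concrete, here is a rough R sketch of this kind of belief updating. It is not the model from the paper: it assumes two hypothetical Gaussian VOT categories for /b/ and /p/, classification by Bayes’ rule with equal priors, and a conjugate update of the /b/ mean after exposure to ambiguous tokens that are lexically disambiguated as /b/. All numbers are made up.

# Illustrative only: hypothetical Gaussian VOT categories and a conjugate
# update of the /b/ category mean after exposure. All numbers are made up.
classify_p <- function(vot, mu_b, mu_p, sd_cat = 10) {
  lik_b <- dnorm(vot, mu_b, sd_cat)
  lik_p <- dnorm(vot, mu_p, sd_cat)
  lik_p / (lik_b + lik_p)        # posterior probability of /p/ (equal priors)
}
mu_b <- 0; mu_p <- 50            # hypothetical category means (VOT in ms)
classify_p(25, mu_b, mu_p)       # ambiguous token: ~0.50

# Exposure: 20 ambiguous tokens (VOT = 25 ms) that can only be /b/.
# Conjugate update of the /b/ mean (known variance), with prior beliefs
# weighted as a pseudo-count of prior observations.
exposure <- rep(25, 20)
n_prior  <- 40                   # hypothetical confidence in prior beliefs
mu_b_new <- (n_prior * mu_b + sum(exposure)) / (n_prior + length(exposure))
classify_p(25, mu_b_new, mu_p)   # ~0.15: the same token is now mostly heard as /b/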

more on old and new lme4

(This is another guest post by Klinton Bicknell.)

This is an update to my previous blog post, in which I observed that post-version-1.0 versions of the lme4 package yielded worse model fits than the old pre-version-1.0 versions for typical psycholinguistic datasets, and in which I gave instructions for installing the legacy lme4.0 package. As I mentioned there, however, lme4 is under active development. The short version of this update is that the latest post-version-1.0 releases of lme4 now yield model fits that are just as good as, and often better than, those of lme4.0! This seems to be due to the use of a new optimizer, better convergence checking, and probably other improvements as well. Thus, installing lme4.0 now seems useful only in special situations involving old code that expects the internals of the models to look a certain way. Life is once again easier thanks to the furious work of the lme4 development team!
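
For readers who want to sanity-check this on their own data, here is a minimal sketch of fitting the same model under different optimizers in current lme4 and comparing log-likelihoods. The sleepstudy data and formula are just stand-ins for your own dataset and model.

# Fit the same model under two optimizers in current lme4 and compare fits.
# sleepstudy (shipped with lme4) stands in for your own data and formula.
library(lme4)
m_default <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
m_bobyqa  <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy,
                  control = lmerControl(optimizer = "bobyqa"))
logLik(m_default)
logLik(m_bobyqa)                 # higher log-likelihood = better fit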

[update: Since lme4 1.1-7 binaries are now on CRAN, this paragraph is obsolete.] One minor (short-lived) snag is that the current version of lme4 on CRAN (1.1-6) is overzealous about convergence warnings and displays them in many cases where models have in fact converged properly. This will be fixed in 1.1-7 (more info here). To avoid these warnings for now, the easiest thing to do is probably to install the current development version of lme4 1.1-7 from GitHub like so:

library("devtools"); install_github("lme4/lme4")

Read on if you want to hear more details about my comparisons of the versions.

Congratulations to Dr. Fedzechkina

Congratulations to Masha (a.k.a. Dr. Fedzechkina) for successfully defending her thesis “Communicative Efficiency, Language Learning, and Language Universals“, jointly advised by Lissa Newport (now at Georgetown) and me. Masha’s thesis presents seven multi-day artificial language learning studies that investigate the extent to which functional pressures guide language learning, thereby leading learners to subtly deviate from the input they receive.

Five of the experiments investigate the trade-off between word order and case-marking as a means of encoding grammatical function assignment. For a preview of these experiments, see the short report in Fedzechkina, Jaeger, and Newport (2011) and this paper under review. Two additional experiments investigate how learners trade off animacy and case-marking (Fedzechkina, Jaeger, & Newport, 2012). Her most recent studies also show how learners trade off uncertainty (assessed as the conditional entropy over grammatical function assignments given perfect knowledge of the grammar) and effort.
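
As a side note for readers unfamiliar with that uncertainty measure, here is a small R sketch of conditional entropy over grammatical function assignments, computed from a made-up joint distribution of utterance types and assignments. The numbers are purely illustrative and are not from Masha’s experiments.

# Conditional entropy H(assignment | utterance) for a made-up joint
# distribution; rows = utterance types, columns = grammatical function
# assignments. Purely illustrative numbers.
joint <- matrix(c(0.30, 0.10,    # case-marked utterance: mostly unambiguous
                  0.25, 0.35),   # unmarked utterance: assignment more uncertain
                nrow = 2, byrow = TRUE,
                dimnames = list(utterance  = c("marked", "unmarked"),
                                assignment = c("agent-first", "patient-first")))
p_u    <- rowSums(joint)                        # P(utterance)
p_cond <- sweep(joint, 1, p_u, "/")             # P(assignment | utterance)
row_H  <- apply(p_cond, 1, function(p) -sum(ifelse(p > 0, p * log2(p), 0)))
sum(p_u * row_H)                                # conditional entropy in bits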

[Figure: Assorted characters from Masha’s artificial languages (from her thesis roast shirt)]

HLP Lab and collaborators at CMCL, ACL, and CogSci

The summer conference season is coming up, and HLP Lab, friends, and collaborators will be presenting their work at CMCL (Baltimore, joint with ACL), ACL (Baltimore), CogSci (Quebec City), and IWOLP (Geneva). I wanted to take this opportunity to give an update on some of the projects we’ll have a chance to present at these venues. I’ll start with three semi-randomly selected papers.

The ‘softer kind’ of tutorial on linear mixed effect regression

I was recently pointed to this nice and very accessible tutorial by Bodo Winter (at UC Merced) on linear mixed effects regression and how to fit these models in R. If you don’t have much (or any) background in this type of model, I recommend pairing it with a good conceptual introduction, such as Gelman and Hill (2007), and perhaps some slides from our LSA 2013 tutorial.

There are a few things I’d like to add to Bodo’s suggestions regarding how to report your results:

  1. be clear about how you coded the variables, since this changes the interpretation of the coefficients (the betas that are often reported). E.g., say whether you sum- or treatment-coded your factors, and whether you centered or standardized continuous predictors. As part of this, also be clear about the direction of the coding. For example, state that you “sum-coded gender as female (1) vs. male (-1)”. Alternatively, report your results in a way that clearly states the directionality (e.g., “Gender=male, beta = XXX”). See the sketch after this list.
  2. please also report whether collinearity was an issue, e.g., by reporting the highest correlations among the fixed effects.
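
To make both points concrete, here is a small R sketch with made-up variable names (rt, gender, freq, subject) showing sum-coding, centering, and one way to inspect the correlations among the fixed-effect estimates of an lme4 model.

# Made-up data and variable names, for illustration only.
library(lme4)
set.seed(1)
d <- data.frame(rt      = rnorm(200, 500, 50),
                gender  = factor(rep(c("female", "male"), 100)),
                freq    = rnorm(200),
                subject = factor(rep(1:20, each = 10)))
contrasts(d$gender) <- contr.sum(2)                  # sum-coded: female = 1, male = -1 (report this!)
d$freq_c <- as.numeric(scale(d$freq, scale = FALSE)) # centered, but not standardized (report this, too)
m <- lmer(rt ~ gender + freq_c + (1 | subject), data = d)
cov2cor(as.matrix(vcov(m)))  # correlations among fixed-effect estimates
# (these are also printed as 'Correlation of Fixed Effects' in summary(m))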

Happy reading.

Follow HLP lab on the English Zwitscher

Only a few years (decades?) late, HLP lab is now zwitschering insanely uninteresting things on Twitter. You can follow us and get updates about workshops, classes, papers, code, etc. And you can zwitscher back at us and we can all be merry and follow and comment on each other until our eyes pop out or ears explode. In this spirit: @_hlplab_