We hope to see y’all at CUNY in a few weeks. In the interest of hopefully luring you to some of our posters, here’s an overview of the work we’ll be presenting. In particular, we invite our reviewers, who so boldly claimed (but did not provide references for) the triviality of our work ;), to visit our posters and help us mere mortals understand.
- Articulation and hyper-articulation
- Unsupervised and supervised learning during speech perception
- Syntactic priming and implicit learning during sentence comprehension
- Uncovering the biases underlying language production through artificial language learning
Interested in more details? Read on. And, as always, I welcome feedback. (To prevent spam, first-time posters are moderated; after that, your posts will show up right away.)
In a recent PLoS ONE article, Healey, Purver, and Howes (2014) investigate syntactic priming in conversational speech, both within and across speakers. Healey and colleagues follow Reitter et al. (2006) in taking a broad-coverage approach to the corpus-based study of priming: rather than focusing on one or a few specific structures, they assess lexical and structural similarity within and across speakers. The paper concludes with the interesting claim that there is no evidence for syntactic priming within speakers, and that alignment across speakers is actually less than expected by chance once lexical overlap is controlled for. Given more than 30 years of research on syntactic priming, this is a rather striking claim. As some folks have Twitter-bugged me (much appreciated!), I wanted to summarize some quick thoughts here. Apologies in advance for the somewhat HLP-Lab-centric view. If you know of additional studies that seem relevant, please join the discussion and post. Of course, Healey and colleagues are more than welcome to respond and correct me, too.
First, the claim by Healey and colleagues that “previous work has not tested for general syntactic repetition effects in ordinary conversation independently of lexical repetition” (Healey et al 2014, abstract) isn’t quite accurate.
And it’s that time of the year again. Time to take stock. This last year has seen an unusual amount of coming and going. It’s been great to have so many interesting folks visit or spend time in the lab.
- Masha Fedzechkina defended her thesis, investigating what artificial language learning can tell us about the source of (some) language universals. She has started her post-doc at UPenn, where she’s working with John Trueswell and Lila Gleitman. See this earlier post.
- Ting Qian successfully defended his thesis on learning in a (subjectively) non-stationary world (primarily advised by Dick Aslin and including some joint work with me). His thesis contained such delicious and ingenious contraptions as the Hibachi Grill Process, a generalization of the Chinese Restaurant Process based on the insight that the order of stimuli often contains information about the structure of the world, so that a rational observer should take this information into account (unlike basically all standard Bayesian models of learning). Check out his site for links to papers under review. Ting is off to start his post-doc with Joe Austerweil at Brown University.
Speech recognition: Recognizing the familiar, generalizing to the similar, and adapting to the novel
At long last, we have finished a substantial revision of Dave Kleinschmidt’s opus “Robust speech perception: Recognize the familiar, generalize to the similar, and adapt to the novel”. It’s still under review, but we’re excited about it and wanted to share what we have right now.
The paper builds on a large body of research in speech perception and adaptation, as well as distributional learning in other domains, to develop a normative framework of how we manage to understand each other despite the infamous lack of invariance. At the core of the proposal stands the (old, but often under-appreciated) idea that variability in the speech signal is often structured (i.e., conditioned on other variables in the world) and that an ideal observer should take advantage of that structure. This makes speech perception a problem of inference under uncertainty at multiple levels.
Congratulations to Masha (a.k.a. Dr. Fedzechkina) for successfully defending her thesis “Communicative Efficiency, Language Learning, and Language Universals”, jointly advised by Lissa Newport (now at Georgetown) and me. Masha’s thesis presents seven multi-day artificial language learning studies that investigate the extent to which functional pressures guide language learning, thereby leading learners to subtly deviate from the input they receive.
Five of the experiments investigate the trade-off between word order and case-marking as a means of encoding grammatical function assignment. For a preview of these experiments, see the short report in Fedzechkina, Jaeger, and Newport (2011) and this paper under review. Two additional experiments investigate how learners trade off animacy and case-marking (Fedzechkina, Jaeger, & Newport, 2012). Her most recent studies also show how learners trade off uncertainty (assessed as the conditional entropy over grammatical function assignments given perfect knowledge of the grammar) against effort.
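For readers unfamiliar with the measure, conditional entropy here is the standard information-theoretic quantity (the notation below is mine, for illustration, and not necessarily the exact formulation in the thesis):

```latex
H(G \mid U) = - \sum_{u} p(u) \sum_{g} p(g \mid u) \, \log_2 p(g \mid u)
```

where \(u\) ranges over utterances licensed by the grammar and \(g\) over grammatical function assignments (e.g., which noun phrase is the agent). Higher values mean more residual uncertainty for a comprehender who knows the grammar perfectly.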
The summer conference season is coming up, and HLP Lab, friends, and collaborators will be presenting their work at CMCL (Baltimore, joint with ACL), ACL (Baltimore), CogSci (Quebec City), and IWOLP (Geneva). I wanted to take this opportunity to give an update on some of the projects we’ll have a chance to present at these venues. I’ll start with three semi-randomly selected papers.
I was recently pointed to this nice and very accessible tutorial by Bodo Winter (at UC Merced) on linear mixed-effects regression and how to run these models in R. If you don’t have much (or any) background in this type of model, I recommend pairing it with a good conceptual introduction, such as Gelman and Hill (2007), and perhaps some slides from our LSA 2013 tutorial.
There are a few things I’d like to add to Bodo’s suggestions regarding how to report your results:
- Be clear about how you coded the variables, since this changes the interpretation of the coefficients (the betas that are often reported). E.g., say whether you sum- or treatment-coded your factors, whether you centered or standardized continuous predictors, etc. As part of this, also be clear about the direction of the coding. For example, state that you “sum-coded gender as female (1) vs. male (-1)”. Alternatively, report your results in a way that clearly states the directionality (e.g., “Gender=male, beta = XXX”).
- Please also report whether collinearity was an issue, e.g., by reporting the highest correlation among the fixed effects.
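To make these recommendations concrete, here’s a minimal sketch in R using lme4 (the data and all variable names are made up for illustration):

```r
library(lme4)

# Hypothetical data: reaction times by gender and word frequency
d <- data.frame(
  rt      = rnorm(200, mean = 500, sd = 50),
  gender  = factor(rep(c("female", "male"), 100)),
  freq    = runif(200, 1, 7),
  subject = factor(rep(1:20, each = 10))
)

# Sum-code gender as female (1) vs. male (-1)
contrasts(d$gender) <- contr.sum(2)

# Center the continuous predictor
d$freq.c <- d$freq - mean(d$freq)

m <- lmer(rt ~ gender + freq.c + (1 | subject), data = d)
summary(m)  # the "Correlation of Fixed Effects" block in this output
            # gives the fixed effect correlations to report
```

With this coding, the intercept estimates the grand mean and the gender coefficient estimates the deviation of female from that mean (i.e., half the female-male difference), which is exactly why a report needs to spell the coding out.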
Only a few years (decades?) late, HLP Lab is now zwitschering insanely uninteresting things on Twitter. You can follow us and get updates about workshops, classes, papers, code, etc. And you can zwitscher back at us, and we can all be merry and follow and comment on each other until our eyes pop out or our ears explode. In this spirit: @_hlplab_
Presentation at CNS symposium on “Prediction, adaptation and plasticity of language processing in the adult brain”
Earlier this week, Dave Kleinschmidt and I gave a presentation as part of a mini-symposium at the Cognitive Neuroscience Society conference on “Prediction, adaptation and plasticity of language processing in the adult brain”, organized by Gina Kuperberg. For this symposium, we were tasked with addressing the following questions:
- What is prediction and why do we predict?
- What is adaptation and why do we adapt?
- How do prediction and adaptation relate?
Although we addressed these questions in the context of language processing, most of our points are pretty general. We aimed to provide intuitions about the notions of distribution, prediction, distributional/statistical learning, and adaptation, and we walked through examples of belief-updating, intentionally keeping the presentation math-free. Perhaps some of the slides are of interest to some of you, so I attached them below. A more in-depth treatment of these questions is provided in Kleinschmidt & Jaeger (under review, available on request).
Comments welcome. (Sorry, some of the slides look strange after importing and all the animations got lost, but I think they are all readable.)
It was great to see these notions discussed and related to ERP, MEG, and fMRI research in the three other presentations of the symposium by Matt Davis, Kara Federmeier and Eddy Wlotko, and Gina Kuperberg. You can read their abstracts following the link to the symposium I included above.
Post-doctoral position available (speech perception, language comprehension, implicit distributional learning, inference under uncertainty, hierarchical predictive systems)
The Human Language Processing (HLP) Lab at the University of Rochester is looking for a post-doctoral researcher interested in speech perception and adaptation. Possible start dates for this 1-3 year position range from mid-August 2014 to mid-June 2015 (the current post-doctoral researcher funded under this grant will leave HLP Lab in late August to start a tenure-track position in Psychology at the University of Pittsburgh). International applicants are welcome (NIH research grants are not limited to US nationals).
We will start reviewing applications in mid-June 2014, though later submissions are welcome. Applications should contain (1) a cover letter clearly indicating possible start dates, (2) a CV, (3) a research statement detailing qualifications and research interests, and (4) two or more letters of recommendation. Applications and letters should be emailed to Kathy Corser (firstname.lastname@example.org), subject line “application for post-doc position (HLP Lab)”.
This is an NIH-funded project (NICHD R01 HD075797), currently scheduled to run through 2018. The project is a collaboration between Florian Jaeger (PI), Mike Tanenhaus (co-PI), Robbie Jacobs, and Dick Aslin. We are interested in …
… seeks to understand the remarkable efficiency of language comprehension, using the tools of probability theory and statistical decision theory as explanatory frameworks. My work suggests that we achieve communicative efficiency by exploiting rich, structured probabilistic information about language: leveraging linguistic redundancy to fill in details absent from the perceptual signal, spending less time processing more frequent material, and making predictions about language material not yet encountered.
A few days ago, I posted a summary of some recent work on syntactic alignment with Kodi Weatherholtz and Kathryn Campbell-Kibler (both at The Ohio State University), in which we used the WAMI interface to collect speech data for research on language production over Amazon’s Mechanical Turk.
The first step in our OSU-Rochester collaboration on socially mediated syntactic alignment was submitted a couple of weeks ago. Kodi Weatherholtz in Linguistics at The Ohio State University took the lead on this project, together with Kathryn Campbell-Kibler (same department) and me.
We collected spoken picture descriptions via Amazon’s crowdsourcing platform Mechanical Turk to investigate how social attitude towards an interlocutor and conflict management styles affect syntactic priming. Our paradigm combines …
Thanks to Scott Jackson, Daniel Ezra Johnson, David Morris, Michael Shvartzman, and Nathanial Smith for the recommendations and pointers to the packages mentioned below.
- The maps, mapsextra, and maptools packages provide data and tools to plot world, US, and a variety of regional maps (see also mapproj and mapdata). This, combined with ggplot2, is also what we used in Jaeger et al. (2011, 2012) to plot distributions over world maps. Here’s an example from ggplot2 with maps.
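For anyone who wants to try the maps + ggplot2 combination, here is a minimal sketch (the site coordinates are made up for illustration, and mapproj needs to be installed for the map projection):

```r
library(ggplot2)
library(maps)   # supplies the polygon data that map_data() pulls in

world <- map_data("world")

# Hypothetical sample sites (longitude/latitude)
sites <- data.frame(long = c(-77.6, 8.5), lat = c(43.2, 47.4))

ggplot() +
  geom_polygon(data = world, aes(x = long, y = lat, group = group),
               fill = "grey90", colour = "grey60") +
  geom_point(data = sites, aes(x = long, y = lat),
             colour = "red", size = 2) +
  coord_map("mercator")   # uses mapproj under the hood
```

From here it is a small step to, e.g., mapping a continuous variable onto point colour or size to plot a distribution over the map.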
I’ll be giving a plenary presentation at the 15th Texas Linguistic Society conference, to be held in October in Austin, TX. Philippe Schlenker (NYU) and David Beaver (UT Austin) will be giving plenaries, too. The special session will be on the “importance of experimental evidence in theories of syntax and semantics, and focus on research that highlights the unique advantages of the experimental environment, as opposed to other sources of data” (from their website). Submit an abstract before May 1st, and I’ll see you there.