- Project 1: Inference and learning during speech perception and adaptation
- Project 2: Web-based self-administered speech therapy
Although we mention preferred specializations below, applicants from any field in the cognitive and language sciences are welcome. While candidates will join an active project, they are also welcome and encouraged to develop their own independent research program. In case of doubt, please contact Florian Jaeger at firstname.lastname@example.org, rather than self-selecting not to apply.
Another year has passed and academic platforms bombard us with end-of-year summaries. So, here are the most-read HLP Lab papers of 2015. Congratulations to Dave Kleinschmidt, who according to ResearchGate leads the 2015 HLP Lab pack with his beautiful paper on the ideal adapter framework for speech perception, adaptation, and generalization. The paper was cited 22 times in the first 6 months after publication! Well deserved, I think … as a completely neutral (and non-ideal) observer ;).
Language, Cognition, and Neuroscience just published Esteban Buz’s paper on the relation between the time course of lexical planning and the detail of articulation (as hypothesized by production ease accounts).
Several recent proposals hold that much if not all of explainable pronunciation variation (variation in the realization of a word) can be reduced to effects on the ease of lexical planning. Such production ease accounts have been proposed, for example, for effects of frequency, predictability, givenness, or phonological overlap with recently produced words on the articulation of a word. According to these accounts, these effects on articulation are mediated through parallel effects on the time course of lexical planning (e.g., recent research by Jennifer Arnold, Jason Kahn, Duane Watson, and others; see references in the paper).
This would indeed offer a parsimonious explanation of pronunciation variation. However, the critical test for this claim is a mediation analysis.
At long last! It’s my great pleasure to announce the publication of the special issue on “Laboratory in the field: advances in cross-linguistic psycholinguistics”, edited by Alice Harris (UMass), Elisabeth Norcliffe (MPI, Nijmegen), and me (Rochester), in Language, Cognition, and Neuroscience. It is an exciting collection of cross-linguistic studies on language production and comprehension and it feels great to see the proofs for the whole shiny issue:
We recently submitted a research review on “Speech perception and generalization across talkers and accents”, which provides an overview of the critical concepts and debates in this domain of research. The manuscript is still under review, but we wanted to share the current version. Of course, feedback is always welcome.
In this paper, we review the mixture of processes that enable robust understanding of speech across talkers despite the lack of invariance. These processes include (i) automatic pre-speech adjustments of the distribution of energy over acoustic frequencies (normalization); (ii) sensitivity to category-relevant acoustic cues that are invariant across talkers (acoustic invariance); (iii) sensitivity to articulatory/gestural cues, which can be perceived directly (audio-visual integration) or recovered from the acoustic signal (articulatory recovery); (iv) implicit statistical learning of talker-specific properties (adaptation, perceptual recalibration); and (v) the use of past experiences (e.g., specific exemplars) and structured knowledge about pronunciation variation (e.g., patterns of variation that exist across talkers with the same accent) to guide speech perception (exemplar-based recognition, generalization).
As requested by some, here are the slides from my 2015 CUNY Sentence Processing Conference plenary last week:
I’m posting them here for discussion purposes only. During the Q&A, several interesting points were raised.
We hope to see y’all at CUNY in a few weeks. In the hope of luring you to some of our posters, here’s an overview of the work we’ll be presenting. In particular, we invite our reviewers, who so boldly claimed (but did not provide references for) the triviality of our work ;), to visit our posters and help us mere mortals understand.
- Articulation and hyper-articulation
- Unsupervised and supervised learning during speech perception
- Syntactic priming and implicit learning during sentence comprehension
- Uncovering the biases underlying language production through artificial language learning
Interested in more details? Read on. And, as always, I welcome feedback. (To prevent spam, first-time posters are moderated; after that, your posts will always appear directly.)