As requested by some, here are the slides from my 2015 CUNY Sentence Processing Conference plenary last week:
I’m posting them here for discussion purposes only. During the Q&A, several interesting points were raised.
A few days ago, I presented at the Gradience in Grammar workshop organized by Joan Bresnan, Dan Lassiter, and Annie Zaenen at Stanford’s CSLI (1/17-18). The discussion and audience reactions (incl. lack of reaction in some parts of the audience) prompted a few thoughts/questions about gradience, grammar, and to what extent the meaning of generative has survived in modern-day generative grammar. I decided to break this up into two posts. This post summarizes the workshop – thanks to Annie, Dan, and Joan for putting this together!
The stated goal of the workshop was (quoting from the website):
For most linguists it is now clear that most, if not all, grammaticality judgments are graded. This insight is leading to a renewed interest in implicit knowledge of “soft” grammatical constraints and generalizations from statistical learning and in probabilistic or variable models of grammar, such as probabilistic or exemplar-based grammars. This workshop aims to stimulate discussion of the empirical techniques and linguistic models that gradience in grammar calls for, by bringing internationally known speakers representing various perspectives on the cognitive science of grammar from linguistics, psychology, and computation.
Apologies in advance for butchering the presenters’ points with my highly subjective summary; feel free to comment. Two of the talks demonstrated
Congratulations to Ting Qian and Dave Kleinschmidt, both students in the Brain and Cognitive Sciences program at Rochester and members of HLP Lab, for being awarded a Google Travel Grant to CogSci 2012 in Sapporo, Japan, where they will present their work, which centers around implicit statistical learning and adaptation during language acquisition and processing:
- Qian, T., Reeder, P.A., Aslin, R.N., Tenenbaum, J.B., and Newport, E.L. 2012. Exploring the Role of Representation in Models of Grammatical Category Acquisition. In TBA (eds.) Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci12), TBA. Austin, TX: Cognitive Science Society.
- Kleinschmidt, D. and Jaeger, T. F. 2012. A continuum of phonetic adaptation: Evaluating an incremental belief-updating model of recalibration and selective adaptation. In TBA (eds.) Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci12), TBA. Austin, TX: Cognitive Science Society.
- Kleinschmidt, D., Fine, A. B., and Jaeger, T. F. 2012. A belief-updating model of adaptation and cue combination in syntactic comprehension. In TBA (eds.) Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci12), TBA. Austin, TX: Cognitive Science Society.
And, thank you, dear Google.
Better late than never: Congratulations to Dave Kleinschmidt for winning the “Student Talk Prize” at the 2011 meeting of Architecture and Mechanisms of Language Processing (AMLaP) in Paris, France. If you want to learn more about Dave’s work on A Bayesian belief updating model of phonetic recalibration and selective adaptation, either have a look at this AMLaP abstract or read Dave’s short ACL paper on some of the findings, presented at the 2011 Cognitive Modeling and Computational Linguistics workshop in Portland, Oregon (here’s a link to the full proceedings).
If you’re interested in this line of work, you might also enjoy reading Morgan Sonderegger and Alan Yu’s 2010 CogSci paper on A rational account of perceptual compensation for coarticulation, which we learned about recently.
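To give a flavor of the general idea behind belief-updating accounts of recalibration (this is my own illustrative sketch, not code or numbers from Dave’s paper): a listener’s belief about a phonetic category’s mean cue value can be modeled as a Gaussian that is updated with each incoming token via conjugate normal-normal updating, so exposure to a talker whose tokens are shifted pulls the category toward that talker.

```python
# Minimal sketch of Bayesian belief updating for a phonetic category.
# All quantities (cue, prior, variances) are illustrative assumptions.

def update_belief(mu0, var0, observations, noise_var):
    """Conjugate normal-normal update of a category's mean cue value.

    mu0, var0    : prior mean and variance over the category mean (e.g. VOT in ms)
    observations : cue values heard for this category
    noise_var    : assumed within-category (talker + perceptual) variance
    """
    n = len(observations)
    if n == 0:
        return mu0, var0
    precision = 1.0 / var0 + n / noise_var
    post_var = 1.0 / precision
    post_mu = post_var * (mu0 / var0 + sum(observations) / noise_var)
    return post_mu, post_var

# Prior belief about a category's mean cue value, centered at 0 ms:
mu, var = 0.0, 100.0
# Exposure to ten tokens from a talker shifted toward +15 ms:
mu, var = update_belief(mu, var, [15.0] * 10, noise_var=225.0)
print(round(mu, 1))  # → 12.2 (belief has shifted toward the talker's tokens)
```

The amount of shift depends on the ratio of prior to observation precision, which is one way such models can capture the gradual, exposure-dependent character of recalibration.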
Congratulations to Dave Kleinschmidt for receiving an NSF Graduate Fellowship for his work on computational models of phonological category acquisition and phonological adaptation. Congratulations also to Esteban Buz for an honorable mention and excellent reviews for his work on using the iterative artificial language learning paradigm to study language change at the morpho-syntactic level.
Dave just finished a write-up of the first steps in his research on phonological adaptation. As soon as the reviewer comments are in, we will post the paper for comments on academia.edu.
Update 11/30/11: Dave’s paper is now available on academia.edu.
- Kleinschmidt, D. and Jaeger, T. F. 2011. A Bayesian belief updating model of phonetic recalibration and selective adaptation. Proceedings of the Cognitive Modeling and Computational Linguistics Workshop at ACL, Portland, OR, June 23rd, 10-19.
Congratulations to Dave for receiving an honorable mention for the Best Student Paper award at the ACL workshop. An updated version of this paper won the Best Student Talk award at AMLaP 2011 in Paris, France.
Congratulations to Dave Kleinschmidt and Masha Fedzechkina for being awarded LSA stipends to attend the Linguistic Society of America’s 2011 Summer Institute at CU Boulder!
My sabbatical is nearing its end (shiver). So, there’s much to catch up on. HLP Lab has once again grown and shrunk, leading to grinking report #2 (cf. #1):
First a farewell to the lost ones:
- Austin Frank has graduated with an absolutely wonderful thesis (work with Mike Tanenhaus and Dick Aslin) on perturbation. In his studies, Austin manipulates what participants think they are saying: he shifts the first formant of the acoustic signal they produce up or down and plays it back to them over headsets within about 14 ms, thereby creating the misleading perception of having mispronounced the word (the ‘perturbation’). I won’t go into the gory technical challenges Austin had to overcome to run these studies. His thesis work provides evidence that (a) speakers adapt their pronunciation partly based on auditory feedback about their own production, (b) these adaptations are pretty rapid, and (c) they are sensitive to the structure of the phonological lexicon. For example, speakers are less likely to shift their production into a corner of the phonological space that is already occupied by other words in the language … (yeah, cool, right?). He currently holds a post-doc position at Haskins and UConn, working with Jim Magnuson.
and a welcome to the newbies:
- Esteban Buz has joined us from Johns Hopkins, where he worked with Robert Frank and Kerry LeDoux. It seems he has chosen questions about functional explanations for language change as his first research topic, which he will explore using iterative learning studies. In particular, he’s interested in how changes over time are, in part, a reflection of acquisition and processing biases.
- David Kleinschmidt has joined the lab after a year at Maryland. He did his undergraduate work at Williams College, with stints at Emory and the University of Maine. He’s interested in computational modeling and speech perception, and specifically in developing models of how phonetic categories are learned and deployed that are plausible from linguistic, computational, neural, and developmental perspectives. Dave’s also working with Dick Aslin and Alex Pouget.