A few days ago, I presented at the Gradience in Grammar workshop organized by Joan Bresnan, Dan Lassiter, and Annie Zaenen at Stanford's CSLI (1/17-18). The discussion and audience reactions (including the lack of reaction in some parts of the audience) prompted a few thoughts and questions about gradience, grammar, and to what extent the meaning of "generative" has survived in modern-day generative grammar. I decided to break this up into two posts. This one summarizes the workshop – thanks to Annie, Dan, and Joan for putting it together!
The stated goal of the workshop was (quoting from the website):
For most linguists it is now clear that most, if not all, grammaticality judgments are graded. This insight is leading to a renewed interest in implicit knowledge of “soft” grammatical constraints and generalizations from statistical learning and in probabilistic or variable models of grammar, such as probabilistic or exemplar-based grammars. This workshop aims to stimulate discussion of the empirical techniques and linguistic models that gradience in grammar calls for, by bringing internationally known speakers representing various perspectives on the cognitive science of grammar from linguistics, psychology, and computation.
Apologies in advance for butchering the presenters' points with my highly subjective summary; feel free to comment and correct me. Two of the talks demonstrated
It's great to be able to announce that the Center for Language Sciences at the University of Rochester is about to grow. Two new faculty members, Steven Piantadosi and Celeste Kidd, will join our Brain and Cognitive Sciences department, starting in the summer of 2014. Chances are you know both of them, but here's a short intro.
Celeste's research focuses on decision making, attention, and language development in infants and children. Her recent work includes research on the trade-off between too much and too little information/surprisal in learning, and how infants seem to strive for a middle ground (the 'Goldilocks effect'). She also recently revisited the well-known marshmallow study, putting an intriguing new twist on it: her study suggests that kids can prioritize long-term over short-term rewards if they have evidence that the long-term rewards will reliably be delivered (see the paper).
Steven's research focuses on using probabilistic inference to learn and process language. His thesis investigated probabilistic models of semantic acquisition — how complex thoughts are acquired through composition out of simpler thought elements (thesis). He has also authored several beautiful papers on how communicative pressures (formalized in terms of information theory and probabilistic inference) are cross-linguistically reflected in the phonological structure of the mental lexicon. Another line of his research focuses on recursion – see, for example, his recent work on recursion in Pirahã (talk).
Together they won the 2010 Computational Modeling Prize (Perception/Action) of the Cognitive Science Society.