As requested by some, here are the slides from my 2015 CUNY Sentence Processing Conference plenary last week:
I’m posting them here for discussion purposes only. During the Q&A several interesting points were raised.
A few days ago, I presented at the Gradience in Grammar workshop organized by Joan Bresnan, Dan Lassiter, and Annie Zaenen at Stanford’s CSLI (1/17-18). The discussion and audience reactions (incl. lack of reaction in some parts of the audience) prompted a few thoughts/questions about gradience, grammar, and to what extent the meaning of generative has survived in modern-day generative grammar. I decided to break this up into two posts. This one summarizes the workshop – thanks to Annie, Dan, and Joan for putting this together!
The stated goal of the workshop was (quoting from the website):
For most linguists it is now clear that most, if not all, grammaticality judgments are graded. This insight is leading to a renewed interest in implicit knowledge of “soft” grammatical constraints and generalizations from statistical learning and in probabilistic or variable models of grammar, such as probabilistic or exemplar-based grammars. This workshop aims to stimulate discussion of the empirical techniques and linguistic models that gradience in grammar calls for, by bringing internationally known speakers representing various perspectives on the cognitive science of grammar from linguistics, psychology, and computation.
Apologies in advance for butchering the presenters’ points with my highly subjective summary; feel free to comment. Two of the talks demonstrated
In case there’s interest, have a look at the papers to be presented at this year’s Cognitive Science meeting in Boston (July 20th-23rd). HLP lab will be represented by two talks and four posters. The two talks will present work employing artificial language learning to address questions about typological generalizations:
- Masha Fedzechkina (BCS, University of Rochester) will present evidence that language learners are biased to reduce uncertainty in the mapping from form to meaning. Her work compares the acquisition of miniature languages with and without case-marking, in terms of the extent to which learners tend to regularize or even fix variable word orders for these two types of languages (Fedzechkina, Jaeger, & Newport, 2011). Together with other recent work (e.g. by Newport, by Culbertson), this work provides evidence that language learners deviate from the input provided to them in a predictable manner. In this case, we designed the experiment to directly test the functionalist claim that language learners are biased towards acquiring languages that support communication (cf. Bates and MacWhinney’s early work).
- Hal Tily (BCS, MIT) will present work employing a novel web-based artificial language learning paradigm, in which hundreds of participants can be run within a few days. Using this paradigm, we first replicated and extended a well-known study on determiner learning (Hudson Kam and Newport, 2004) and then investigated to what extent cross-linguistically observed quantitative patterns in argument and determiner order are replicated by language learners. We discuss how this paradigm will facilitate further tests of typological generalizations (Tily, Frank, & Jaeger, 2011).