
A few reflections on “Gradience in Grammar”


In my earlier post I provided a summary of last week's workshop on Gradience in Grammar at Stanford. The workshop prompted many interesting discussions, but here I want to talk about an (admittedly long-running) discussion it didn't prompt. Several of the presentations at the workshop dealt with prediction/expectation and how they are a critical part of language understanding. One implication of these talks is that understanding the nature and structure of our implicit knowledge of linguistic distributions (linguistic statistics) is crucial to advancing linguistics. As I was told later, there were, however, a number of people in the audience who thought that this type of data doesn't tell us anything about linguistics and, in particular, grammar (unfortunately, this opinion was expressed outside the Q&A session and not to the people giving the talks, so it didn't contribute to the discussion). Read the rest of this entry »


“Gradience in Grammar” workshop at CSLI, Stanford (#gradience2014)


A few days ago, I presented at the Gradience in Grammar workshop organized by Joan Bresnan, Dan Lassiter, and Annie Zaenen at Stanford's CSLI (1/17-18). The discussion and audience reactions (incl. the lack of reaction in some parts of the audience) prompted a few thoughts/questions about Gradience, Grammar, and the extent to which the meaning of "generative" has survived in modern-day generative grammar. I decided to break this up into two posts. This one summarizes the workshop – thanks to Annie, Dan, and Joan for putting this together!

The stated goal of the workshop was (quoting from the website):

For most linguists it is now clear that most, if not all, grammaticality judgments are graded. This insight is leading to a renewed interest in implicit knowledge of “soft” grammatical constraints and generalizations from statistical learning and in probabilistic or variable models of grammar, such as probabilistic or exemplar-based grammars. This workshop aims to stimulate discussion of the empirical techniques and linguistic models that gradience in grammar calls for, by bringing internationally known speakers representing various perspectives on the cognitive science of grammar from linguistics, psychology, and computation. 

Apologies in advance for butchering the presenters’ points with my highly subjective summary; feel free to comment. Two of the talks demonstrated

Read the rest of this entry »

Our Special Issue is coming out: Parsimony and Redundancy in Models of Language


It’s almost done! After about two years of work, our Special Issue on Parsimony and Redundancy in Models of Language (Wiechmann, Kerz, Snider & Jaeger 2013) is about to come out in Language and Speech, Vol 56(3). The brunt of the editorial work in putting this together was shouldered by Daniel Wiechmann, who just started his new position at the University of Amsterdam, and Elma Kerz, in the Department of Anglistik at the University of Aachen.

Cover of the Special Issue on Parsimony and Redundancy in Models of Language (in Language and Speech)

I am excited about this Special Issue, which, I think, brings together a variety of positions on representational redundancy and parsimony in linguistic theory building, as well as on the role of redundancy in the development of language over time. Some contributions discuss different computational and representational architectures; others test these theories or investigate specific assumptions about the nature of linguistic representations. Read the rest of this entry »

Fine-grained linguistic knowledge, CUNY poster


And here is one more poster from CUNY. This one is work by Robin Melnick at Stanford, together with Tom Wasow. Robin ran forced-choice and 100-point preference norming experiments on that-mentioning in relative and complement clauses to investigate the extent to which the factors that affect processing correlate with the factors affecting acceptability judgments. Going beyond previous work, he directly correlates the effect sizes of individual predictors in the processing and acceptability models. All experiments were run both in the lab and over the web using Mechanical Turk.