CUNY 2015 plenary

Posted on Updated on

As requested by some, here are the slides from my plenary at last week's 2015 CUNY Sentence Processing Conference.

I’m posting them here for discussion purposes only. During the Q&A, several interesting points were raised. Read the rest of this entry »

HLP Lab at CUNY 2015

We hope to see y’all at CUNY in a few weeks. In the interest of hopefully luring you to some of our posters, here’s an overview of the work we’ll be presenting. In particular, we invite our reviewers, who so boldly claimed (but provided no references for) the triviality of our work ;), to visit our posters and help us mere mortals understand.

  • Articulation and hyper-articulation
  • Unsupervised and supervised learning during speech perception
  • Syntactic priming and implicit learning during sentence comprehension
  • Uncovering the biases underlying language production through artificial language learning

Interested in more details? Read on. And, as always, I welcome feedback. (To prevent spam, first-time commenters are moderated; after that, your comments will appear immediately.)

Read the rest of this entry »

HLP Lab and collaborators at CMCL, ACL, and CogSci

The summer conference season is coming up and HLP Lab, friends, and collaborators will be presenting their work at CMCL (Baltimore, joint with ACL), ACL (Baltimore), CogSci (Quebec City), and IWOLP (Geneva). I wanted to take this opportunity to give an update on some of the projects we’ll have a chance to present at these venues. I’ll start with three semi-randomly selected papers. Read the rest of this entry »

Presentation at CNS symposium on “Prediction, adaptation and plasticity of language processing in the adult brain”

Earlier this week, Dave Kleinschmidt and I gave a presentation as part of a mini-symposium at the Cognitive Neuroscience Society (CNS) meeting on “Prediction, adaptation and plasticity of language processing in the adult brain”, organized by Gina Kuperberg. For this symposium, we were tasked with addressing the following questions:

  1. What is prediction and why do we predict?
  2. What is adaptation and why do we adapt?
  3. How do prediction and adaptation relate?

Although we addressed these questions in the context of language processing, most of our points are pretty general. We aimed to provide intuitions about the notions of distribution, prediction, distributional/statistical learning, and adaptation, and we walked through examples of belief-updating, intentionally keeping the presentation math-free. Perhaps some of the slides are of interest to some of you, so I attached them below. A more in-depth treatment of these questions is provided in Kleinschmidt & Jaeger (under review, available on request).
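If it helps to make “belief-updating” concrete, here is a toy sketch of the idea in Python (my own illustration for this post, not taken from the slides or the paper; the prior and the counts are invented):

```python
# Toy illustration of Bayesian belief-updating: a listener tracks a talker's
# probability p of producing some variant, starting from a Beta(2, 2) prior
# and updating the pseudo-counts with each observation.

def update_beta(alpha, beta, observations):
    """Update Beta pseudo-counts with a sequence of 0/1 observations."""
    for obs in observations:
        if obs == 1:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

def posterior_mean(alpha, beta):
    """Expected value of p under a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Prior belief: p is probably around 0.5.
alpha, beta = 2, 2
print(posterior_mean(alpha, beta))  # 0.5

# After hearing the variant 8 times out of 10, the belief shifts toward the
# talker's statistics: adaptation as distributional learning.
alpha, beta = update_beta(alpha, beta, [1] * 8 + [0] * 2)
print(posterior_mean(alpha, beta))  # 10/14, i.e. about 0.71
```

The Beta-Binomial case is just the simplest conjugate example; the point is only that “prediction” falls out of the current belief and “adaptation” falls out of the update.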

Comments welcome. (Sorry, some of the slides look strange after importing and all the animations got lost, but I think they are all readable.)

It was great to see these notions discussed and related to ERP, MEG, and fMRI research in the three other presentations of the symposium by Matt Davis, Kara Federmeier and Eddy Wlotko, and Gina Kuperberg. You can read their abstracts following the link to the symposium I included above.

Another example of recording spoken productions over the web

A few days ago, I posted a summary of some recent work on syntactic alignment with Kodi Weatherholtz and Kathryn Campbell-Kibler (both at The Ohio State University), in which we used the WAMI interface to collect speech data over Amazon’s Mechanical Turk for research on language production.

Jaeger and Grimshaw (2013). Poster presented at AMLaP, Marseilles, France. (Poster corrected after printing.)

Read the rest of this entry »

Join me at the 15th Texas Linguistic Society conference?

I’ll be giving a plenary presentation at the 15th Texas Linguistic Society conference, to be held in October in Austin, TX. Philippe Schlenker (NYU) and David Beaver (UT Austin) will be giving plenaries, too. The special session will be on the “importance of experimental evidence in theories of syntax and semantics, and focus on research that highlights the unique advantages of the experimental environment, as opposed to other sources of data” (from their website). Submit an abstract before May 1st, and I’ll see you there.

“Gradience in Grammar” workshop at CSLI, Stanford (#gradience2014)

A few days ago, I presented at the Gradience in Grammar workshop organized by Joan Bresnan, Dan Lassiter, and Annie Zaenen at Stanford’s CSLI (1/17-18). The discussion and audience reactions (incl. the lack of reaction in some parts of the audience) prompted a few thoughts/questions about gradience, grammar, and to what extent the meaning of “generative” has survived in modern-day generative grammar. I decided to break this up into two posts. This one summarizes the workshop; thanks to Annie, Dan, and Joan for putting it together!

The stated goal of the workshop was (quoting from the website):

For most linguists it is now clear that most, if not all, grammaticality judgments are graded. This insight is leading to a renewed interest in implicit knowledge of “soft” grammatical constraints and generalizations from statistical learning and in probabilistic or variable models of grammar, such as probabilistic or exemplar-based grammars. This workshop aims to stimulate discussion of the empirical techniques and linguistic models that gradience in grammar calls for, by bringing internationally known speakers representing various perspectives on the cognitive science of grammar from linguistics, psychology, and computation. 

Apologies in advance for butchering the presenters’ points with my highly subjective summary; feel free to comment. Two of the talks demonstrated

Read the rest of this entry »

now, this is broader impact

This is federal funding well spent: after the CUNY Sentence Processing Conference, Daniel Pontillo reaches out to the broader public and explains, to a captive audience of night owls at a packed IHOP, how eye-tracking data allow us to test how we process the world (the poster is on implicit naming, or rather the lack thereof, in visual world experiments). The presentation was a resounding success. One member of an underrepresented minority was likely recruited for a research career in the cognitive sciences. A brawl that later ensued on the same premises stands in no relation to this presentation, in which only waffles were harmed. Science never stops. We are grateful for all the feedback received from IHOPers during the poster presentation.

(Disclaimer: federal funds were only used to print the poster, which was first presented at the Sentence Processing Conference.)

Dan Pontillo gives an impromptu poster presentation at the IHOP around 2-something a.m., Columbia, S.C.

Language is shaped by brain’s desire for clarity and ease

Congratulations to Masha Fedzechkina, whose article on a bias for efficient information transfer during language learning has just appeared in the Proceedings of the National Academy of Sciences (link to article).

Here’s some news coverage

More to come soon.

Erratum: We are sorry that in our paper we forgot to acknowledge the help of three undergraduate research assistants, Andy Wood, Irene Minkina, and Cassandra Donatelli, in preparing the video animations used in our artificial language learning task.

congratulations to Ting Qian and Dave Kleinschmidt

Congratulations to Ting Qian and Dave Kleinschmidt, both students in the Brain and Cognitive Science Program at Rochester and members of HLP Lab, for being awarded a Google Travel Grant to CogSci 2012 in Sapporo, Japan, where they will present their work, which centers on implicit statistical learning and adaptation during language acquisition and processing:

And, thank you, dear Google.

Word order and case marking in language acquisition and processing (LSA poster and CogSci paper)

We presented the results of our artificial language learning study on the use of case-marking and word order as cues in processing and learning at the LSA annual meeting. This is work done with Florian Jaeger and Elissa Newport. We investigated whether functional pressures (e.g., ambiguity reduction) operate during language acquisition, biasing learners to (subtly) deviate from the input they receive. Our results suggest that language learners indeed have a bias to reduce uncertainty (or ambiguity) in the input language: The learners are more likely to fix the word order if a language does not have case. See the image below for the details of the study or download the poster as a pdf here. Feedback welcome!

Update 11/29/11: This work was published in the 2011 CogSci Proceedings as


Posted on

Judith Degen, Masha Fedzechkina, and I just came back from Ohio State’s linguistics department, where we had a great time presenting and discussing our work. Masha gave her first talk ever, presenting her work in the artificial language learning paradigm on functional biases in acquisition (an extension of her LSA poster, soon to be posted here). Judith gave a wonderful guest lecture for Shari Speer’s introduction to psycholinguistics. She talked about scalar implicature and her work with Mike Tanenhaus on this topic. Since even I got it (and I am well known to be pragmatically challenged), I can highly recommend her slides on scalar implicature processing (beware: it’s a monster file; click and go grab a coffee).

Thanks to everyone there for great and insightful conversations and for organizing this. I was particularly excited to hear about potential applications of Uniform Information Density to natural language generation (please keep me posted!). Oh, and extra big thanks to Judith Tonhauser and her fat white cat.

HLP Lab at the LSA and congratulations to Judith Degen and Masha Fedzechkina

Congratulations to Judith Degen and Masha Fedzechkina for having their two abstracts selected among only twelve deemed “media-worthy” by LSA reviewers:

  • Degen, J. and Jaeger, T. F. 2011.  Speakers sacrifice some (of the) precision in conveyed meaning to accommodate robust communication. Talk to be presented at the 2011 Meeting of the LSA.
    • Session: Pragmatics II  31
    • Room: Le Batea
    • Time: Friday 2pm

The process of encoding an intended meaning into a linguistic utterance is well-known to be affected by production pressures. We present corpus data suggesting that the choice between even two seemingly non-meaning-equivalent forms as in (1a) and (1b) can be affected by speakers’ preference to distribute information uniformly across the linguistic signal (Uniform Information Density (UID), Jaeger 2006). This suggests that even when two forms do not encode the same (but a similar enough) message, speakers may sacrifice precision in meaning for increased processing efficiency.

(1a) Alex ate some chard.
(1b) Alex ate some of the chard.
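If a worked toy example of the UID idea helps: per-word surprisal is -log2 of a word’s in-context probability, and UID amounts to a preference for variants whose surprisal profile is spread more evenly. The probabilities below are invented purely for illustration (they are not from the corpus study):

```python
import math

def surprisal_profile(word_probs):
    """Per-word surprisal in bits: -log2 of each word's in-context probability."""
    return [-math.log2(p) for p in word_probs]

def variance(xs):
    """Population variance, as a rough measure of how uneven a profile is."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Hypothetical probabilities: the extra function words in (1b) carry little
# information themselves, but they spread the information of an unexpected
# noun over more words, smoothing the surprisal profile.
short = surprisal_profile([0.3, 0.01])             # "some", "chard"
longer = surprisal_profile([0.3, 0.6, 0.7, 0.05])  # "some", "of", "the", "chard"

print(variance(short) > variance(longer))  # True: the longer form is more uniform
```

Under these made-up numbers, the (1b)-style variant has the more uniform surprisal profile, which is the kind of preference the abstract describes.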

  • Fedzechkina, M., Jaeger, T. F., and Newport, E. 2011. Word order and case marking in language acquisition and processing. Poster to be presented at the 2011 Meeting of the LSA.
    • Session: Language Acquisition/Psycholinguistics/Syntax
    • Room: Grand Ballroom Foyer
    • Time: 9:00 – 10:30 AM.

To understand a sentence, comprehenders must identify its actor and patient. In principle, these relationships can be signaled using a single cue, but most languages employ several redundant cues, including word order and case marking. In artificial language learning experiments, we investigate word order and case as cues in processing and learning. In languages without case marking, learners regularize word order; but when case marking is present, it is favored and limits word order regularization. Case marking comes with a disadvantage: it is more complex to acquire. But the present results suggest that this may be outweighed by the clarity it provides for processing.

Read the rest of this entry »

Fine grained linguistic knowledge, CUNY poster

And here is one more poster from CUNY. This one is work by Robin Melnick at Stanford, together with Tom Wasow. Robin ran forced-choice and 100-point preference norming experiments on that-mentioning in relative and complement clauses to investigate the extent to which the factors that affect processing correlate with the factors affecting acceptability judgments. Going beyond previous work, he directly correlates the effect sizes of individual predictors in the processing and acceptability models. All experiments were run both in the lab and over the web using Mechanical Turk.

Psycholinguistics in the field 2, CUNY Poster

And here is one more poster on Yucatec, following Lindsay’s example. This is work by Lis Norcliffe, who just graduated from Stanford and joined the MPI in Nijmegen. Her thesis work is on the (possibly resumptive) morphology discussed in this poster, and the experiments were part of that thesis, too. You’ll find effects of definiteness and dependency length, which we investigated since (in our view) they provide evidence that this morphological reduction alternation is affected by both a preference for uniform information density and a preference for dependency minimization. Feedback welcome.