Presentation at CNS symposium on “Prediction, adaptation and plasticity of language processing in the adult brain”

Posted on Updated on

Earlier this week, Dave Kleinschmidt and I gave a presentation as part of a mini-symposium at the Cognitive Neuroscience Society (CNS) annual meeting on “Prediction, adaptation and plasticity of language processing in the adult brain”, organized by Gina Kuperberg. For this symposium, we were asked to address the following questions:

  1. What is prediction and why do we predict?
  2. What is adaptation and why do we adapt?
  3. How do prediction and adaptation relate?

Although we addressed these questions in the context of language processing, most of our points are fairly general. We aimed to provide intuitions about the notions of distribution, prediction, distributional/statistical learning, and adaptation, and we walked through examples of belief updating, intentionally keeping the presentation math-free. Perhaps some of the slides are of interest to some of you, so I have attached them below. A more in-depth treatment of these questions is provided in Kleinschmidt & Jaeger (under review, available on request).
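For readers who do want a little math: the kind of belief updating we have in mind can be sketched in a few lines of code. This is not from the talk (which was math-free) but a minimal, hypothetical illustration, assuming a listener who tracks how often a speaker uses some variant and updates a Beta prior with each observed production (a Beta-Binomial model).

```python
# Minimal sketch of belief updating about a speaker's usage of a variant.
# alpha/beta are Beta pseudo-counts: prior "successes" and "failures".

def update(alpha, beta, observed_variant):
    """Return updated Beta pseudo-counts after one observation."""
    return (alpha + 1, beta) if observed_variant else (alpha, beta + 1)

alpha, beta = 2.0, 2.0  # weak prior: variant used about half the time
# Hypothetical observations of the speaker's productions:
for obs in [True, True, False, True, True]:
    alpha, beta = update(alpha, beta, obs)

# Posterior mean = alpha / (alpha + beta): the listener's updated estimate.
print(f"posterior mean estimate: {alpha / (alpha + beta):.2f}")  # prints 0.67
```

With more observations, the prior matters less and less; with a stronger prior (larger pseudo-counts), each observation shifts the estimate less. That trade-off is the core intuition behind adaptation as inference.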

Comments welcome. (Sorry: some of the slides look strange after importing, and all the animations were lost, but I think they are all readable.)


It was great to see these notions discussed and related to ERP, MEG, and fMRI research in the three other presentations of the symposium, by Matt Davis, Kara Federmeier and Eddy Wlotko, and Gina Kuperberg. You can read their abstracts by following the link to the symposium above.

Another example of recording spoken productions over the web

Posted on Updated on

A few days ago, I posted a summary of some recent work on syntactic alignment with Kodi Weatherholtz and Kathryn Campbell-Kibler (both at The Ohio State University), in which we used the WAMI interface to collect speech data over Amazon’s Mechanical Turk for research on language production.

Jaeger and Grimshaw (2013). Poster presented at AMLaP, Marseille, France (corrected after printing).

Read the rest of this entry »

Socially-mediated syntactic alignment

Posted on Updated on

The first paper from our OSU-Rochester collaboration on socially-mediated syntactic alignment was submitted a couple of weeks ago. Kodi Weatherholtz in Linguistics at The Ohio State University took the lead on this project, together with Kathryn Campbell-Kibler (same department) and me.

Welcome screen with sound check from our web-based speech recording experiment.

We collected spoken picture descriptions via Amazon’s crowdsourcing platform Mechanical Turk to investigate how social attitudes towards an interlocutor and conflict-management styles affected syntactic priming. Our paradigm combines Read the rest of this entry »

Perspective paper on second (and third and …) language learning as hierarchical inference

Posted on Updated on

We’ve just submitted a perspective paper on second (and third and …) language learning as hierarchical inference that I hope might be of interest to some of you (feedback welcome).

Figure 1: Just as implicit knowledge about different speakers and groups of speakers (such as dialects or accents) can be organized hierarchically across language models, implicit knowledge about multiple languages can be construed as the outcome of hierarchical inference.

We’re building on Read the rest of this entry »

New Paper on Phonetic Variation over Time

Posted on

Max Bane (UChicago, LING), Morgan Sonderegger (UChicago, CS) and I just finished a proceedings paper about our work on phonetic variation. We tracked the VOT (voice onset time) distributions of four contestants in the reality TV show Big Brother (UK, Season 9) over three months, and obtained preliminary results showing that social perturbations can potentially explain non-linearities in phonetic parameters over time. The paper is linked below; we would be extremely grateful for feedback.

Bane, Max, Peter Graff and Morgan Sonderegger. 2010. Longitudinal Phonetic Variation in a Closed System.

If you would like to hear more about our project, come hear us talk at the LSA in Pittsburgh:

Session 21: Social Factors in Variation
Room: Benedum
Time: Friday, January 7th, 2011 at 9AM

Watch the OCP rock Google’s ngram viewer

Posted on Updated on

Thanks to Zach Warren for pointing me to this cool tool by Google: the ngram viewer.

And just as a demonstration of how cool this is: watch how the OCP rocks that-mention in complement clauses to the verb believe. Of course, we would have to control for other complement-clause-embedding verbs and for the a priori probability of this vs. that in determiner and pronoun uses. And no, that will be my paper, so don’t you dare write it. If you’re into OCP effects on optional function word use (e.g., because they might be taken to argue for phonological effects on grammatical encoding), see the references below.

And here are some more verbs for those complement clause lovers out there:
Read the rest of this entry »

Results of animacy and accessibility in Yucatec

Posted on Updated on

Good news! We’ve analyzed the previously mentioned experiment on animacy and word order in Yucatec. We coded animacy of the Agent and Patient referents (human, animal, inanimate), transitivity (transitive, intransitive) and voice (active, passive, other) of the verb. We also coded the definiteness of the Agent and Patient referents (definite, indefinite).

Overall, Agent-Verb-Patient word order was strongly preferred (see Table 1). Moreover, human referents were more likely to be mentioned earlier in the sentence (ps<0.0001, interaction n.s., N=597), which is predicted by direct accessibility accounts. Human agents and patients were more likely to be described as definite (ps<0.0002), and definite NPs showed a tendency to be mentioned earlier (agent: p<0.0001; patient: n.s.; interaction: p<0.0001). Still, the effect of animacy held independently (ps<0.002; interaction n.s.). The agent animacy effect was somewhat mediated by an effect on transitivity (whether participants described an event as, e.g., an apple hitting a man or an apple falling on a man), in that inanimate agents were less often described transitively (p<0.0001; no patient effects). The agent animacy effect remained significant even within transitive sentences (p<0.004; no interaction, N=502). As for voice, human agents correlated with the use of active voice (p<0.0001), and human patients correlated with the use of passive voice, though not as strongly (p<0.03, N=604).

Table 1: Word order and voice (Agent, Patient, and Verb of 531 transitives, excluding 161 non-transitives)

Word order           Total   Active   Passive   Other
Agent-Verb-Patient     440      427         7       6
Patient-Verb-Agent      63        2        61       0
Other                   28       20         7       1
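The association between word order and voice in the table above is easy to see by turning the counts into row-wise proportions. The following is just a sketch for readers who want to poke at the numbers themselves; the counts are copied from the table, and the variable names are ours:

```python
# Counts from the word order x voice table (531 transitive descriptions).
counts = {
    "Agent-Verb-Patient": {"Active": 427, "Passive": 7, "Other": 6},
    "Patient-Verb-Agent": {"Active": 2, "Passive": 61, "Other": 0},
    "Other": {"Active": 20, "Passive": 7, "Other": 1},
}

# Print each word order's voice distribution as percentages of its row total.
for order, voices in counts.items():
    total = sum(voices.values())
    shares = ", ".join(f"{v}: {n / total:.0%}" for v, n in voices.items())
    print(f"{order} (N={total}): {shares}")
```

Running this shows that Agent-Verb-Patient orders are almost exclusively active, while Patient-Verb-Agent orders are almost exclusively passive: word order and voice are very tightly linked in these data.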

What does this mean? In Yucatec, passive voice is encoded by verbal morphology, and it neither presupposes nor precludes a change in word order. Yet when the patient was human, sentences were more likely to be in the passive voice, and human patients were also more likely to be mentioned earlier. In other words, human patients attract both passive morphology and earlier mention.