Presentation at CNS symposium on “Prediction, adaptation and plasticity of language processing in the adult brain”
Earlier this week, Dave Kleinschmidt and I gave a presentation as part of a mini-symposium at the Cognitive Neuroscience Society conference on “Prediction, adaptation and plasticity of language processing in the adult brain,” organized by Gina Kuperberg. For this symposium, we were asked to address the following questions:
- What is prediction and why do we predict?
- What is adaptation and why do we adapt?
- How do prediction and adaptation relate?
Although we addressed these questions in the context of language processing, most of our points are fairly general. We aimed to provide intuitions about the notions of distribution, prediction, distributional/statistical learning, and adaptation, and we walked through examples of belief-updating while intentionally keeping the presentation math-free. Perhaps some of the slides are of interest to some of you, so I've attached them below. A more in-depth treatment of these questions is provided in Kleinschmidt & Jaeger (under review, available on request).
Comments welcome. (Sorry, some of the slides look strange after importing and all the animations were lost, but I think they are all readable.)
It was great to see these notions discussed and related to ERP, MEG, and fMRI research in the three other presentations of the symposium by Matt Davis, Kara Federmeier and Eddy Wlotko, and Gina Kuperberg. You can read their abstracts following the link to the symposium I included above.
A few days ago, I posted a summary of some recent work on syntactic alignment with Kodi Weatherholtz and Kathryn Campbell-Kibler (both at The Ohio State University), in which we used the WAMI interface to collect speech data for research on language production over Amazon’s Mechanical Turk.
The first step in our OSU-Rochester collaboration on socially-mediated syntactic alignment was submitted a couple of weeks ago. Kodi Weatherholtz in Linguistics at The Ohio State University took the lead on this project, together with Kathryn Campbell-Kibler (same department) and me.
We collected spoken picture descriptions via Amazon’s crowdsourcing platform Mechanical Turk to investigate how social attitude towards an interlocutor and conflict management styles affected syntactic priming. Our paradigm combines Read the rest of this entry »
We’ve just submitted a perspective paper on second (and third and …) language learning as hierarchical inference that I hope might be of interest to some of you (feedback welcome).
- Pajak, B., Fine, A. B., Kleinschmidt, D., and Jaeger, T. F. (submitted). Learning additional languages as hierarchical probabilistic inference: insights from L1 processing. Submitted for review to Language Learning.
We’re building on Read the rest of this entry »
And just as a demonstration of how cool this is: watch how the OCP rocks that-mention in complement clauses to the verb believe. Of course, we would have to check other complement-clause-embedding verbs and the a priori probability of this vs. that in determiner and pronoun uses. And no, that will be my paper. So don’t you dare write it. If you're into OCP effects on optional function word use (e.g., because they might be taken to argue for phonological effects on grammatical encoding), see the references below.
And here are some more verbs for those complement clause lovers out there:
Read the rest of this entry »
Good news! We’ve analyzed the previously mentioned experiment on animacy and word order in Yucatec. We coded animacy of the Agent and Patient referents (human, animal, inanimate), transitivity (transitive, intransitive) and voice (active, passive, other) of the verb. We also coded the definiteness of the Agent and Patient referents (definite, indefinite).
Overall, Agent-Verb-Patient word order was strongly preferred (see Table 1). Moreover, human referents were more likely to appear earlier in the sentence (ps<0.0001, interaction n.s., N=597), as predicted by direct accessibility accounts. Human agents and patients were more likely to be described as definite (ps<0.0002), and definite NPs tended to be mentioned earlier (agent: p<0.0001; patient: n.s.; interaction p<0.0001). Still, the effect of animacy held independently (ps<0.002; interaction n.s.). The agent animacy effect was partly mediated by an effect on transitivity (whether participants described an event as, e.g., an apple hitting a man or an apple falling on a man): inanimate agents were less often described transitively (p<0.0001; no patient effects). The agent animacy effect remained significant even within transitive sentences (p<0.004; no interaction, N=502). As for voice, human agents correlated with the use of active voice (p<0.0001), and human patients correlated with the use of passive voice, though not as strongly (p<0.03, N=604).
What does this mean? In Yucatec, passive voice is encoded by verbal morphology; it neither presupposes nor precludes a change in word order. When the patient was human, sentences were more likely to be in the passive voice, and human patients were also more likely to be mentioned earlier. In other words, human patients attracted both passive voice morphology and earlier mention.
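To make the patient-animacy/voice association concrete, here is a purely illustrative sketch in Python. The counts below are made up for illustration only (they are NOT our Yucatec data, which is summarized by the p-values above); the sketch just shows how such an association can be summarized as passive-voice rates and an odds ratio.

```python
# Hypothetical counts (NOT the actual Yucatec data), illustrating an
# association between patient animacy and voice choice.
# Values are (active, passive) sentence counts per patient type.
counts = {"human": (120, 60), "inanimate": (200, 20)}

def passive_rate(active, passive):
    """Proportion of sentences in the passive voice."""
    return passive / (active + passive)

human_rate = passive_rate(*counts["human"])          # 60/180
inanimate_rate = passive_rate(*counts["inanimate"])  # 20/220

# Odds ratio: how much higher are the odds of passive voice
# when the patient is human?
odds_human = counts["human"][1] / counts["human"][0]           # 60/120
odds_inanimate = counts["inanimate"][1] / counts["inanimate"][0]  # 20/200
odds_ratio = odds_human / odds_inanimate

print(f"passive rate (human) = {human_rate:.2f}, "
      f"(inanimate) = {inanimate_rate:.2f}, OR = {odds_ratio:.1f}")
```

With these invented counts, human patients would show a passive rate of about 0.33 versus 0.09 for inanimate patients (odds ratio 5.0), i.e., the qualitative pattern reported above.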