We hope to see y’all at CUNY in a few weeks. In the interest of hopefully luring you to some of our posters, here’s an overview of the work we’ll be presenting. In particular, we invite our reviewers, who so boldly claimed (but did not provide references for) the triviality of our work ;), to visit our posters and help us mere mortals understand.
- Articulation and hyper-articulation
- Unsupervised and supervised learning during speech perception
- Syntactic priming and implicit learning during sentence comprehension
- Uncovering the biases underlying language production through artificial language learning
Interested in more details? Read on. And, as always, I welcome feedback. (To prevent spam, first-time commenters are moderated; after that, your posts will appear immediately.)
Articulation and hyper-articulation
When does it happen? How targeted can it be? And how flexible is the system — specifically, do the systems underlying articulation take into account information about the perceived communicative success of previous utterances, so that subsequent utterances are adjusted dynamically as a function of such feedback? This would be in line with influential perspectives in the tradition of research on articulation [e.g., Guenther et al., 1998] and some ideas in phonetics [Lindblom, 1990], but contrary to what many psycholinguistic accounts of language production would suggest [e.g., Arnold, 2010; Bard et al., 2000; see also the rather static notion of listener-generic registers to explain certain effects that seem to suggest audience design, Dell and Brown, 1991, and subsequent references to it, e.g., in Ferreira, 2008]. This work builds on a small but rapidly growing body of research by Baese-Berk and Goldrick (2009), Goldrick et al. (2013), Kirov and Wilson (2012), Roche et al. (2010), Schertz (2013), and Stent et al. (2008). Please come, discuss, disagree (but please cite your sources, unlike the reviewers), and make suggestions:
- Buz, E., Tanenhaus, M. K., and Jaeger, T. F. 2015. Production is sensitive to perceived communicative success. The 28th CUNY Sentence Processing Conference. USC, CA, March 19th-21st.
- Seyfarth, S., Buz, E., and Jaeger, T. F. 2015. Talkers selectively shorten vowel productions to enhance relevant contrasts. The 28th CUNY Sentence Processing Conference. USC, CA, March 19th-21st.
Unsupervised and supervised adaptation in speech perception
Do learners take advantage of the information provided by labels when adapting to unexpected pronunciations (e.g., when an auditorily ambiguous sound is embedded in a stimulus that is only a word of English under one of its interpretations [cf. shigarette, as used in Kraljic and Samuel’s work])? We know that such top-down information does get integrated at some point during processing, but we couldn’t find direct tests of whether this information can also affect learning. While there are plenty of studies on supervised adaptation (e.g., Norris et al., 2003; Eisner and McQueen, 2005; Kraljic and Samuel, 2006, 2007; Vroomen et al., 2007) and some studies on unsupervised adaptation (Clayards et al., 2008; C. Munson, 2011), we couldn’t find studies that directly compare the two. To our surprise, our studies so far have not been able to detect an advantage of labeling information. There are two interpretations of this finding: either information from top-down labels is not considered during learning (i.e., some form of encapsulation, contrary to what one would a priori expect under ideal adaptor accounts; Kleinschmidt and Jaeger, 2015), or even unsupervised learning is generally so rapid (thanks to strong priors about the expected variability and about when there is a need to learn/adapt) that our paradigms could not detect the advantage of supervision (i.e., there was a ceiling effect). Your feedback is appreciated!
- Kleinschmidt, D., Raizada, R., and Jaeger, T. F. 2015. Informativity in adaptation: Supervised and unsupervised learning of linguistic cue distributions. The 28th CUNY Sentence Processing Conference. USC, CA, March 19th-21st.
Syntactic priming and implicit learning during sentence comprehension
Recent work has begun to link syntactic priming to the adaptation of syntactic expectations (Farmer et al., 2014; Fine et al., 2013; Fine, 2013; Kaschak and Glenberg, 2004). The former has typically been studied by looking at trial-by-trial effects, which are assumed to be short-lived (for a notable exception, see Kaschak’s work). The latter explicitly focuses on the cumulative effect of exposure to multiple sentences (Fine et al., 2010; Kamide, 2012; Wells et al., 2009), with recent proposals holding that these changes in expectations are driven by the goal of efficient language understanding, which requires subjective probabilistic beliefs that sufficiently closely approximate the actual statistics of the linguistic signal (e.g., Fine et al., 2013). This year, we are presenting follow-up studies that seek to test predictions of this idea (some fulfilled, some not) or extend it to the processing of dialectal syntactic variants (following up on and extending Kaschak and Glenberg, 2004).
- Craycraft, N. and Jaeger, T. F. 2015. Adapting syntactic expectations in the face of changing cue informativity. The 28th CUNY Sentence Processing Conference. USC, CA, March 19th-21st.
- Fine, A. B. and Jaeger, T. F. 2015. The role of verb repetition in cumulative syntactic priming. The 28th CUNY Sentence Processing Conference. USC, CA, March 19th-21st.
- Fraundorf, S. and Jaeger, T. F. 2015. Experience with Dialectal Variants Modulates Online Syntactic Comprehension. The 28th CUNY Sentence Processing Conference. USC, CA, March 19th-21st.
Uncovering the biases underlying language production through artificial language learning
This work aims to use artificial language learning as a tool to study the (perhaps universal?) biases underlying the language production system. Specifically, we design artificial languages that differ from the L1 of our monolingual test participants. For example, what happens when an English speaker learns a miniature language that is verb-final? Will they reverse their normal preference to order short before long constituents and now exhibit a long-before-short preference (as has been observed for speakers of head-final languages, such as Korean and Japanese; e.g., Yamashita and Chang, 2001, 2006)? Contrary to reviewers’ claims, this prediction (made by Hawkins, 1994, 2004; see also Yamashita and Chang, 2006; Yamashita and Kondo, 2009) has not gained traction in the psycholinguistic literature, which keeps referring to a universal short-before-long order (presumably at least partly because it goes well with the idea of availability-based production; Ferreira, 2008; MacDonald, 2013). Yet our studies indeed find evidence that English learners’ preference switches from short-before-long to long-before-short when they learn a verb-final language! Why the relevance of this finding for language production is not recognized by the reviewers is, quite frankly, baffling to me. I am looking forward to hearing why it is a trivial finding.
- Fedzechkina, M., Jaeger, T. F., and Trueswell, J. 2015. Producing informative cues early: Evidence from a miniature artificial language. The 28th CUNY Sentence Processing Conference. USC, CA, March 19th-21st.
- Fedzechkina, M. and Jaeger, T. F. 2015. ‘Long before short’ preference in a head-final artificial language: In support of dependency minimization accounts. The 28th CUNY Sentence Processing Conference. USC, CA, March 19th-21st.