artificial language learning
At long last! It’s my great pleasure to announce the publication of the special issue on “Laboratory in the field: advances in cross-linguistic psycholinguistics”, edited by Alice Harris (UMass), Elisabeth Norcliffe (MPI, Nijmegen), and me (Rochester), in Language, Cognition and Neuroscience. It is an exciting collection of cross-linguistic studies on language production and comprehension, and it feels great to see the proofs for the whole shiny issue:
We hope to see y’all at CUNY in a few weeks. In the interest of hopefully luring you to some of our posters, here’s an overview of the work we’ll be presenting. In particular, we invite our reviewers, who so boldly claimed (but did not provide references for) the triviality of our work ;), to visit our posters and help us mere mortals understand.
- Articulation and hyper-articulation
- Unsupervised and supervised learning during speech perception
- Syntactic priming and implicit learning during sentence comprehension
- Uncovering the biases underlying language production through artificial language learning
Interested in more details? Read on. And, as always, I welcome feedback. (To prevent spam, first-time posters are moderated; after that, your posts will always show up directly.)
The Human Language Processing (HLP/Jaeger) Lab in the Department of Brain and Cognitive Sciences at the University of Rochester is looking for PhD researchers to join the lab. Admission is through the PhD program in Brain and Cognitive Sciences, which offers a full five-year scholarship. International applications are welcome.
We presented the results of our artificial language learning study on the use of case marking and word order as cues in processing and learning at the LSA annual meeting. This is joint work with Florian Jaeger and Elissa Newport. We investigated whether functional pressures (e.g., ambiguity reduction) operate during language acquisition, biasing learners to (subtly) deviate from the input they receive. Our results suggest that language learners indeed have a bias to reduce uncertainty (or ambiguity) in the input language: learners are more likely to fix word order if the language does not have case marking. See the image below for the details of the study, or download the poster as a pdf here. Feedback welcome!
Update 11/29/11: This work was published in the 2011 CogSci Proceedings as
- Fedzechkina, M., Jaeger, T. F., and Newport, E. L. 2011. Functional Biases in Language Learning: Evidence from Word Order and Case-Marking Interaction. Proceedings of the 33rd Annual Meeting of the Cognitive Science Society (CogSci11), 318-323.
Judith Degen, Masha Fedzechkina, and I just came back from Ohio State’s linguistics department, where we had a great time presenting and discussing our work. Masha gave her first talk ever, presenting her work in the artificial language learning paradigm on functional biases in acquisition (an extension of her LSA poster, soon to be posted here). Judith gave a wonderful guest lecture for Shari Speer’s introduction to psycholinguistics. She talked about scalar implicature and her work with Mike Tanenhaus on this topic. Since even I got it (and I am well known to be pragmatically challenged), I can highly recommend her slides on scalar implicature processing (beware: it’s a monster file, so click and go grab a coffee).
Thanks to everyone there for the great and insightful conversations and for organizing the visit. I was particularly excited to hear about potential applications of Uniform Information Density to natural language generation (please keep me posted!). Oh, and extra big thanks to Judith Tonhauser and her fat white cat.