LSA 2011 at Boulder: Yeah!

Woohooo. Roger Levy and I will be teaching a class on Computational Psycholinguistics at the 2011 LSA Linguistics Institute, to be held July 5th – August 5th next year in Boulder, CO. The class description should be available through their website soon, but here are some snippets from our proposal:

Course Motivation and Description: Over the last two decades, cognitive science has undergone a paradigm shift towards probabilistic models of the brain and cognition. Many aspects of human cognition are now understood in terms of rational use of available information in the light of uncertainty (Anderson 1990; e.g., models of memory, categorization, generalization and concept learning, visual inference, and motor planning). Building on a long tradition of computational models of language, such rational models have also been proposed for language processing and acquisition (see Jaeger, in press; Jurafsky, 2003; Levy, 2008 for overviews). This class provides an overview of the newly emerging field of computational psycholinguistics, which combines insights and methods from linguistic theory, natural language processing, machine learning, psycholinguistics, and cognitive science in the study of how we understand and produce language. There has been a surge in work in this area, which is attracting scholars from many disciplines. The goal of this class is to provide students with enough background to start their own research in computational psycholinguistics.

We draw on probability theory and information theory to provide a formal framework for the study of language processing. Probability theory plays a central role in addressing uncertainty and resource allocation in comprehension. It is also essential for the formal definition of information.

In comprehension, language users must resolve uncertainty in real time over a potentially unbounded set of possible signals and meanings in order to draw accurate conclusions about the intended meaning of an utterance. How can a fixed set of knowledge and resources be deployed to manage this uncertainty, and what consequences does this deployment have for online language comprehension? We draw on some of the most exciting developments in contemporary cognitive science to characterize this problem as one crucially involving probabilistic inference and decision theory. In this class, we cover computational models using probabilistic grammars that make this characterization explicit, as well as a range of related empirical results, including online and offline syntactic ambiguity resolution and the precise nature of the relationship between reaction times and expectations for upcoming words and grammatical events.
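To make the link between expectations and reaction times concrete, here is a minimal sketch (not from the course materials) of surprisal, the negative log probability of a word given its context. The toy corpus and the unsmoothed bigram model are illustrative assumptions; real models are estimated from large corpora or probabilistic grammars.

```python
import math
from collections import Counter

# Toy corpus; a real model would be estimated from a large corpus or treebank.
corpus = "the horse raced past the barn fell".split()

# Maximum-likelihood bigram estimates (no smoothing, for brevity).
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def surprisal(prev, word):
    """Surprisal in bits: -log2 P(word | prev).
    Higher surprisal predicts longer reading times."""
    p = bigrams[(prev, word)] / unigrams[prev]
    return -math.log2(p)

for prev, word in zip(corpus, corpus[1:]):
    print(f"{word!r} after {prev!r}: {surprisal(prev, word):.2f} bits")
```

Here "horse" after "the" carries 1 bit of surprisal (since "the" is followed by "horse" or "barn" equally often in the toy corpus), while fully predictable continuations carry 0 bits.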

In production, speakers are faced with squeezing their intended message through a sequential interface (speech) in an efficient and yet robust way. We draw on insights from information theory, such as the noisy channel theorem, which tells us what speakers should do if language production is to optimize communication. Information theory also provides a precise way to quantify what is meant by information in terms of probability, and methods from natural language processing provide ways to estimate it. This makes the intriguing prediction that human language use is organized to be efficient testable. We summarize findings that bear on this question from all levels of linguistic processing, ranging from subphonemic detail (e.g., the center of gravity of vowels or the VOT of consonants) via morphosyntactic variation (e.g., auxiliary contraction or object drop) to the organization of discourse beyond the clause (see syllabus for more detail).
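The information-theoretic definition of information is easy to state in code. The sketch below (an illustration of Shannon information content, with made-up probabilities for an auxiliary-contraction choice) shows how predictability translates into bits: a highly predictable form carries little information, which is one way to think about why speakers can afford to reduce it.

```python
import math

def info_bits(p):
    """Shannon information content, in bits, of an outcome with probability p."""
    return -math.log2(p)

# Hypothetical context-conditioned probabilities, for illustration only:
# suppose the contracted form ("you're") is much more likely here than
# the full form ("you are").
p_contracted = 0.8
p_full = 0.2

# The predictable form carries fewer bits, so reducing it costs little;
# an improbable form carries more information and is better kept full.
print(f"contracted: {info_bits(p_contracted):.2f} bits")
print(f"full:       {info_bits(p_full):.2f} bits")
```

For reference, an outcome with probability 0.5 carries exactly 1 bit, and probability 0.25 carries 2 bits.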

The findings from language use (comprehension and production) we discuss in class have consequences for linguistic theory. Functionalist theories have long maintained that grammar is at least partially shaped by language use (see also construction grammar, exemplar-based models, usage-based accounts). The computational theories discussed in class provide a mathematical framework in which such notions as processing complexity and communicative success can be defined and theoretically optimal solutions can be derived and then compared to actually observed human behavior. We conclude our class with a discussion of how computational theories of language comprehension and production can be used to investigate to what extent language (or specifically the observed distribution of grammar across languages of the world) is shaped to provide an optimal solution to communication. In this context, we also discuss recent research on the plasticity of linguistic representations in adults that has shown that relatively little exposure can at least change how we process language, possibly even how it is represented (see syllabus for more detail).

Target Audience: The course should appeal to students interested in computational linguistics, psycholinguistics, functional linguistics, and quantitative/probabilistic models of linguistic knowledge. Rather than focusing only on the mechanisms of language processing, the computational theories we discuss in the class link language processing to communication and language use. This provides a formal framework to discuss the consequences of communicative pressures for both processing and language structure.

So, please join us at the LSA next year ;).


