Special session at the LSA meeting in da’Burgh

You are ever so cordially invited to attend the following awesome-to-be workshop at the LSA 2011:

Empirically Examining Parsimony and Redundancy

in Usage-Based Models

Organized Session at 2011 Linguistic Society of America Annual Meeting

Schedule

Please see http://www.lsadc.org/info/preliminary-program-2011.cfm#saturday-afternoon (#50)

Main Session:
When: Saturday, 1/08, 2-3:30pm (1.5 jam-packed hours of mindless fun)
Where: Grand Ballroom 4, Wyndham Grand Pittsburgh Downtown Hotel, Pittsburgh, PA

Poster Session:
When: Sunday, 1/09, 9am-12pm (the journey continues)
Where: Grand Ballroom Foyer, Wyndham Grand Pittsburgh Downtown Hotel, Pittsburgh, PA

Participants

R. Harald Baayen (University of Alberta)
Joan Bresnan (Stanford University)
Walter Daelemans (University of Antwerp)
Bruce Derwing (University of Alberta)
Daniel Gildea (University of Rochester)
Matthew Goldrick (Northwestern University)
Peter Hendrix (University of Alberta)
Gerard Kempen (Max Planck Institute)
Victor Kuperman (McMaster University)
Yongeun Lee (Chung Ang University)
Gary Libben (University of Calgary)
Marco Marelli (University of Alberta)
Petar Milin (University of Alberta)
Timothy John O’Donnell (Harvard University)
Gabriel Recchia (Indiana University)
Antoine Tremblay (IWK Health Center)
Benjamin V. Tucker (University of Alberta)
Antal van den Bosch (Tilburg University/University of Antwerp)
Chris Westbury (University of Alberta)

Organizers

Neal Snider (Nuance Communications, Inc.)
Daniel Wiechmann (Friedrich-Schiller-Universität Jena)
Elma Kerz (RWTH-Universität Aachen)
T. Florian Jaeger (University of Rochester)

Description

Recent years have seen a growing interest in usage-based (UB) theories of language, which assume that language use plays a causal role in the development of linguistic systems over historical time. A central assumption of the UB framework is that the shapes of grammars are closely connected to principles of human cognitive processing (Bybee 2006, Givon 1991, Hawkins 2004). UB accounts strongly gravitate towards sign- or construction-based theories of language, i.e. theories committed to the view that linguistic knowledge is best conceived of as an assembly of symbolic structures (e.g. Goldberg 2006, Langacker 2008, Sag et al. 2003). These constructionist accounts share (1) the postulation of a single representational format for all linguistic knowledge and (2) a strong commitment to the psychological plausibility of the mechanisms posited for the learning, storage, and retrieval of linguistic units. They do, however, exhibit considerable variation in their architectural and mechanistic details (cf. Croft & Cruse 2004).

A key issue is the balance between storage parsimony and processing parsimony: maximizing storage parsimony is taken to imply greater computational demand, and vice versa. The space of logical possibilities ranges from a complete inheritance model (minimal storage redundancy) to a full-entry model (maximal storage redundancy). The empirical evidence bearing on this question is not yet conclusive. On the one hand, the representations involved in language processing appear to include extremely fine-grained lexical-structural co-occurrences: for example, frequent four-word phrases are processed faster than infrequent ones (Bannard and Matthews 2008, Arnon and Snider 2010). On the other hand, syntactic exemplar models (Bod 2006) have been argued to overfit and undergeneralize compared to models that do not store all structures in the training data (cf. Post and Gildea 2009, although they found that Tree Substitution Grammar representations induced in a Bayesian framework still fall towards the redundant end of the parsimony continuum). Furthermore, experimental work has argued that models of categorization that map phonetic dimensions directly to phonological categories (and therefore more directly reflect the statistics of the training data) do not predict human behavior as well as models that assume independent, intermediate representations (Toscano and McMurray 2010). Additionally, recent work has provided evidence that early support for full-entry models from item-based learning in acquisition (e.g. Pine & Lieven 1997) is confounded, reopening this line of research as well (Yang, unpublished manuscript).

This workshop will bring together linguists, psycholinguists, and computational linguists working within a UB framework to discuss which methodologies can best shed light on questions about the representational nature of constructions and the mechanisms involved in their on-line processing.

Abstracts:

Short (4pp) papers are now available at: http://www.hlp.rochester.edu/lsa2011/ (towards the bottom of the page). We have received interesting offers to publish a special issue. If you have an awesome offer of your own, please contact Neal Snider.

Implicit schemata and categories in memory-based language processing
Antal van den Bosch (Tilburg University/University of Antwerp)
Walter Daelemans (University of Antwerp)

Memory-based language processing (MBLP) is an approach to language processing based on exemplar storage during learning and analogical reasoning during processing (Daelemans & Van den Bosch 2005, 2010). From a cognitive perspective, the approach is attractive because it does not make any assumptions about the way abstractions are shaped, and does not make any a priori distinction between regular and exceptional exemplars, allowing it to explain fluidity of linguistic categories, and irregularization as well as regularization in processing. Schema-like behavior and the emergence of categories can be explained in MBLP as by-products of analogical reasoning over exemplars in memory. Using prepositional phrase attachment and prosodic boundary and accent placement as case studies, we show how abstractions arise in a memory-based framework. We critically discuss the differences between the MBLP approach and other frameworks that do assume some systemic form of abstraction (e.g. prominence hierarchies in syntactic tree fragments).
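The exemplar-plus-analogy idea behind MBLP can be sketched in a few lines: store training items verbatim and classify new items by majority vote among their most similar stored neighbors. This is a minimal k-nearest-neighbor sketch under invented toy data; the actual MBLP work uses the TiMBL software with informed feature weighting, and the PP-attachment exemplars below are made up for illustration:

```python
# Exemplars are stored verbatim; a new item is classified by analogy to its
# k most similar stored neighbors (toy PP-attachment data, invented).
from collections import Counter

def overlap(a, b):
    """Similarity = number of matching feature values."""
    return sum(1 for x, y in zip(a, b) if x == y)

def classify(memory, item, k=3):
    """Majority vote among the k nearest exemplars in memory."""
    neighbors = sorted(memory, key=lambda ex: overlap(ex[0], item), reverse=True)[:k]
    return Counter(label for _, label in neighbors).most_common(1)[0][0]

# (verb, object noun, preposition, PP noun) -> attachment site of the PP
memory = [
    (("eat", "pizza", "with", "fork"), "verb"),
    (("eat", "pizza", "with", "anchovies"), "noun"),
    (("see", "man", "with", "telescope"), "verb"),
    (("buy", "book", "with", "cover"), "noun"),
    (("cut", "bread", "with", "knife"), "verb"),
]

print(classify(memory, ("eat", "soup", "with", "spoon")))  # -> verb
```

Note that no rule or schema is ever stored: the "instrument" generalization emerges only at classification time, from the pooled votes of similar exemplars.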

Sampled representations for tree substitution grammar
Daniel Gildea (University of Rochester)
Matt Post (University of Rochester)

Tree substitution grammars (TSGs) model syntax with a collection of tree fragments of arbitrary shape and size; larger tree fragments allow the encoding of longer-distance dependencies but also result in larger grammars and less ability to generalize. We study methods for extracting tree substitution grammars from syntactically parsed data, evaluating the resulting grammars primarily by their accuracy in parsing new sentences. By treating the decomposition of the parsed data into TSG rules as a hidden variable, we can give more weight to frequently occurring subtrees that are likely to be grammatically significant. By using a Bayesian framework, we achieve a robust trade-off between capturing long-distance dependencies and generalization to new sentences. By experimenting with sampling techniques for finding significant subtrees, we achieve both high parsing accuracy and a compact grammar.
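A minimal sketch of what tree substitution buys: a single stored multi-word fragment can encode a long-distance co-occurrence (here, the verb gave together with its to-PP) that a plain context-free grammar would have to reassemble from independent rules. The representation and fragment inventory below are invented for illustration; the paper's grammars are induced from treebanks by Bayesian sampling:

```python
# Trees are (label, child, ...) tuples; a bare UPPERCASE string among the
# children marks an open substitution site. Fragments of arbitrary size can
# be stored, so "gave ... to ..." is one stored unit, not several CFG rules.

def substitute(tree, subs):
    """Fill substitution sites left-to-right with fragments from `subs`."""
    label, *children = tree
    filled = []
    for child in children:
        if isinstance(child, str) and child.isupper():   # open site
            filled.append(substitute(subs.pop(0), subs))
        elif isinstance(child, str):                     # terminal word
            filled.append(child)
        else:                                            # internal node
            filled.append(substitute(child, subs))
    return (label, *filled)

def leaves(tree):
    """Read off the terminal yield of a derived tree."""
    _, *children = tree
    out = []
    for child in children:
        out.extend([child] if isinstance(child, str) else leaves(child))
    return out

# One large stored fragment encoding the prepositional-dative frame...
frag_gave = ("S", "NP", ("VP", ("V", "gave"), "NP", ("PP", ("P", "to"), "NP")))
# ...plus three small NP fragments to plug into its sites.
nps = [("NP", "John"), ("NP", ("D", "a"), ("N", "book")), ("NP", "Mary")]

print(" ".join(leaves(substitute(frag_gave, nps))))  # -> John gave a book to Mary
```

The trade-off in the abstract is visible even here: storing `frag_gave` whole captures the gave-to dependency directly, but a grammar full of such large fragments generalizes to fewer unseen sentences than one built from smaller pieces.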

Productivity and reuse in language: A Bayesian framework
Timothy John O’Donnell (Harvard University)

We present a computational framework for the mirror-image problems of linguistic productivity and storage. The framework treats the problem of determining which structures should be composed on the fly and which should be retrieved from memory as one of optimal Bayesian inference. The model is evaluated by comparing its performance to competing frameworks including full-parsing, full-listing, and exemplar-based models. The model gives an accurate account of the adult representations and developmental trajectory of the English past tense. The model demonstrates defaultness, blocking, and “elsewhere” behavior as consequences of Bayesian inference. We apply the model to English derivational morphology. The model predicts the differential productivity of English suffixes, placing them on a cline from very productive (+ness) to very unproductive (+th). We discuss the model’s relationship to various empirical measures of productivity. We also show how the model partially accounts for ordering restrictions between affixes.

Empirical evidence for an inflationist lexicon
Antoine Tremblay (IWK Health Center, Halifax, Canada)
Gary Libben (University of Calgary)
Bruce Derwing (University of Alberta)
Chris Westbury (University of Alberta)
Benjamin V. Tucker (University of Alberta)

Although generative and construction grammars both assume that some linguistic forms are stored/retrieved as wholes while others are strung together from simpler parts, they differ with respect to the units that may be stored in the lexicon, and therefore in the rules that combine them. Within the generative framework, the determining factor for storage is regularity. In contrast, for construction grammarians frequency of use is an important factor determining whether a form is stored as a whole or (de)composed on-line. We present empirical evidence from self-paced reading, sentence recall, and chunk production experiments showing that speakers are sensitive to the frequency of use of regular, non-idiomatic multi-word sequences (MWSs; e.g. “at the end of”), thus suggesting that they are stored/retrieved as wholes (favoring the constructionist view). However, these frequency effects could instead reflect speeded/practiced rule-based (de)composition. Results from our chunk recall experiment with event-related potential recordings suggest that some aspects of such MWSs are holistically stored/retrieved.

The role of abstraction in constructing phonological structure
Yongeun Lee (Chung Ang University)
Matthew Goldrick (Northwestern University)

While English groups together vowels and codas into a unit (‘rime’) distinct from the onset (Kessler & Treiman, 1997), Korean (Yoon & Derwing, 2001) groups onsets and vowels into a unit (‘body’) distinct from the coda. We argue that these distinct representational structures are constructed using distributions over abstract phonological forms.

Lee & Goldrick (2008) showed that segments in underlying forms in English and Korean have contrasting distributional patterns that cue contrasting sub-syllabic representations. Analysis of short-term memory errors revealed that speakers utilize this statistical information during language processing.

Although distributions over underlying forms provide clear cues to sub-syllabic structure, distinctions between such forms are frequently neutralized in Korean surface forms (Kim & Jongman, 1996). In novel analyses, we show that the distributional structure of Korean surface forms fails to provide robust cues to sub-syllabic structure. Constructing phonological structure requires sensitivity to distributional information over abstract representations.
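The kind of distributional cue at issue can be illustrated with a toy computation: if vowel identity is statistically associated with the coda but not with the onset, the lexicon's statistics favor a rime (vowel+coda) grouping, and the reverse favors a body (onset+vowel) grouping. The mini-lexicon below is invented, and the measure (mutual information between segment positions) is just one possible choice of association statistic:

```python
# Compare how strongly the vowel covaries with the coda vs. with the onset
# in a toy CVC lexicon. Higher vowel-coda association = rime-like structure.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information (in bits) between two segment positions."""
    joint = Counter(pairs)
    x_marg = Counter(x for x, _ in pairs)
    y_marg = Counter(y for _, y in pairs)
    n = len(pairs)
    return sum((c / n) * log2((c / n) / ((x_marg[x] / n) * (y_marg[y] / n)))
               for (x, y), c in joint.items())

# CVC syllables as (onset, vowel, coda): here the vowel fully determines
# the coda (a->t, i->n) but is independent of the onset.
lexicon = [("k", "a", "t"), ("b", "a", "t"), ("s", "a", "t"),
           ("k", "i", "n"), ("b", "i", "n"), ("s", "i", "n")]

mi_vowel_coda = mutual_information([(v, c) for _, v, c in lexicon])
mi_onset_vowel = mutual_information([(o, v) for o, v, _ in lexicon])
print(mi_vowel_coda > mi_onset_vowel)  # True: statistics favor a rime grouping
```

The abstract's point is that running this kind of computation over Korean surface forms, where neutralization has collapsed many coda contrasts, no longer yields a robust asymmetry; the cue is only recoverable over abstract underlying forms.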

The unproven psychological reality of grammatical movement and gap-filling: Competitive optimization of functional and linear dependencies satisfies the psycholinguistic evidence
Gerard Kempen (Max Planck Institute for Psycholinguistics, Nijmegen/Cognitive Psychology Unit, Leiden University)

For half a century, psycholinguists have sought evidence for the psychological reality of movement transformations and gap-filling. The studies cover a wide range of grammatical movement phenomena (A- and A-bar movement, head movement, scrambling), and deploy virtually the entire arsenal of experimental paradigms in sentence production and comprehension.

Based on a summary of the results of this work, I argue that the data are perfectly compatible with nontransformational treatments of grammatical movement where ‘moved’ constituents are base-generated in their noncanonical surface positions (GPSG, HPSG, LFG, PG). The resulting trees do not include empty terminal nodes (‘gaps’). Instead, constituents in noncanonical position are annotated with features coding for functional and positional relations with other constituents, in particular for relations between dependents and governors (subcategorizers).

In order to account for the behavioral and neurophysiological data, I assume an IAC-type processor (Interactive Activation and Competition) with grammatical constituents viewed as active units that continually compete with each other for optimal dependency and linear order relationships.

Sidestepping the combinatorial explosion: Towards a processing model based on discriminative learning
Harald Baayen (University of Alberta)
Marco Marelli (University of Alberta)
Peter Hendrix (University of Alberta)
Petar Milin (University of Alberta)

We present a new symbolic computational model for word reading based on principles of discriminative learning. For English, the model is trained on short n-grams (in all, 26 million words) of the British National Corpus, and maps orthographic input (letters and letter bigrams) onto the meaning representations of a mere 6,700 morphemes. Without having to posit paradigms (structured lists of inflectionally related exemplars), morphological families (sets of morphologically related complex words), or representations for complex words and phrases, the model correctly predicts (i) syntactic paradigmatic entropy effects in English (and, similarly, morphological paradigmatic entropy effects for Serbian), (ii) frequency effects for complex words and phrases, (iii) morphological family size effects, and various other effects reported in the morphological processing literature. We propose this model as part of a larger, hierarchically structured usage-based constructionist memory for language.
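The discriminative learning this abstract builds on can be sketched with the classic Rescorla-Wagner update rule: weights from orthographic cues (letter bigrams) to meaning outcomes are adjusted by prediction error, with no stored representations for complex words. The cue and outcome inventories and the parameter values below are invented for illustration:

```python
# Rescorla-Wagner sketch: cue-to-outcome weights learn from prediction error.
# After training, "hands" activates PLURAL via its discriminating bigrams
# (ds, s#) even though no whole-word form is ever stored.
from collections import defaultdict

def rw_update(weights, cues, outcomes, all_outcomes, rate=0.1, lam=1.0):
    """One learning event: for each outcome, nudge the weights of the active
    cues toward lambda if the outcome is present, toward 0 if absent."""
    for o in all_outcomes:
        target = lam if o in outcomes else 0.0
        prediction = sum(weights[(c, o)] for c in cues)
        for c in cues:
            weights[(c, o)] += rate * (target - prediction)

def bigrams(word):
    """Orthographic cues: letter bigrams with word-boundary markers."""
    padded = "#" + word + "#"
    return [padded[i:i + 2] for i in range(len(padded) - 1)]

weights = defaultdict(float)
all_outcomes = {"HAND", "PLURAL"}
# Alternate two learning events: orthographic cues -> meaning outcomes.
for _ in range(50):
    rw_update(weights, bigrams("hand"), {"HAND"}, all_outcomes)
    rw_update(weights, bigrams("hands"), {"HAND", "PLURAL"}, all_outcomes)

activation = sum(weights[(c, "PLURAL")] for c in bigrams("hands"))
print(round(activation, 2))  # converges towards 1.0
```

The shared bigrams of "hand" and "hands" end up contributing nothing to PLURAL (they occur equally often without it), so the plural meaning is carried entirely by the discriminative sublexical cues, which is the sense in which the model sidesteps storing complex forms.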

Incremental production of the English dative constructions
Victor Kuperman (McMaster University)
Joan Bresnan (Stanford University)
Gabriel Recchia (Indiana University)

Production of alternating syntactic constructions (ditransitive dative “gave him the book” vs prepositional “gave the book to him”) is influenced by the probabilities of the alternatives and by the accessibility of the construction’s arguments. Yet the probabilities themselves are accurately estimated as a function of accessibility indices (Bresnan et al., 2007; Roland et al., 2008). We probe the explanatory power of probabilistic measures (advocated by information-theoretic accounts: Jaeger, 2010) over and above the influence of non-probabilistic accessibility (espoused by availability-based accounts: Ferreira & Dell, 2000; Solomon & Pearlmutter, 2004) on the spontaneous production of English datives (Tily et al., 2009; Wagner Cook et al., 2009).

Analyses of acoustic durations of all syntactic arguments revealed pervasive effects of accessibility indices in both types of datives; no reliable effects of construction probability in ditransitive datives; and independent probabilistic effects in prepositional datives. This implies that, despite their conceptual compatibility, neither probabilistic nor availability-based accounts of syntactic production are theoretically redundant.

Please note that there is a poster session on Sunday, 1/09
