Papers, Presentations, etc.

Using plyr to get intimate with your data


I gave a short tutorial [pdf slides] at the LSA summer institute on one of my favorite R packages: plyr (another brilliant Hadley Wickham creation). This package provides a set of very nice and semantically clean functions for exploring and manipulating data. The basic process that these functions carry out is to split data up in some way, do something to each piece, and then combine the results from each piece back together again.

One of the most common tasks that I use this for is to do some analysis to data from each subject in an experiment, and collect the results in a data frame. For instance, to calculate the mean and variance of each subject’s reaction time, you could use:

ddply(my.data, "subject.number", function(d) {
  return(data.frame(mean.RT=mean(d$RT), var.RT=var(d$RT)))
})

Plyr also provides a whole host of convenience functions. For instance, you could accomplish the same thing using a one-liner:

ddply(my.data, "subject.number", summarise, mean.RT=mean(RT), var.RT=var(RT))
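
The same split-apply-combine idea extends to crossed grouping factors. As a hypothetical sketch (the condition column is made up here), you could compute per-subject, per-condition cell counts, means, and standard deviations like this:

library(plyr)
# split by subject and condition, summarise each cell, and recombine
ddply(my.data, c("subject.number", "condition"), summarise,
      n=length(RT), mean.RT=mean(RT), sd.RT=sd(RT))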

There are lots more examples (as well as more background on functional programming in general and the other use cases for plyr) in the slides [pdf] (knitr source is here, too).

Optional plural-marking in Yucatec


After many years of data collection, translation, annotation, and analysis, our first longer paper based on our field-based studies of language production in Yucatec Maya is about to appear in print:

The paper discusses speakers’ production preferences in optional plural-marking on nouns and verbs in Yucatec. Yucatec plural-marking is typologically interesting in that there seem to be environments in which plural-marking on either or even both the noun and verb can be omitted without loss of plural meaning.

Lindsay Butler’s thesis (at the University of Arizona) focused on the syntactic theory behind optional plural-marking in Yucatec and what it tells us about the typology of plural-marking. If you’re interested in this topic, you might also find the following article of interest, which provides a broader introduction to the relevant grammatical constraints in Yucatec plural-marking. I will add links soon, but in the meantime feel free to ask for a copy:

  • Bohnemeyer, J. B., Butler, L. K., and Jaeger, T. F. submitted. Head-marking and agreement: Evidence from Yucatec Maya.

Syntactic expectation adaptation (update your beliefs!)


At long last, Alex Fine’s paper on syntactic expectation adaptation is about to appear in PLOS One. You can download the pre-proof from our academia.edu page (the final version will be linked there as soon as it’s available):

  1. Fine, A. B., Jaeger, T. F., Farmer, T., and Qian, T. 2013. Rapid expectation adaptation during syntactic comprehension. PLoS One.

The paper presents a novel framework that ties together syntactic comprehension and implicit learning, connecting work on expectation-based sentence understanding, syntactic priming in comprehension, statistical learning, and speaker-specificity in syntactic comprehension. In two self-paced reading studies, we show that readers rapidly adjust their expectations for specific syntactic structures, converging on the statistics of the current environment. They do so based on both previous experience and recent experience within the experiment.
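
As a toy illustration of the general idea (not the actual model or data from the paper), here is a hypothetical R sketch in which a comprehender’s expectation for one of two structures is updated incrementally, beta-binomial style, with each sentence encountered:

# hypothetical sketch: incremental updating of the expected probability of
# structure A (vs. structure B) under a simple beta-binomial model
a <- 1; b <- 1                       # flat prior over P(structure A)
exposure <- c(1, 1, 0, 1, 1, 1, 0)   # 1 = structure A observed, 0 = structure B
expectation <- numeric(length(exposure))
for (i in seq_along(exposure)) {
  a <- a + exposure[i]
  b <- b + (1 - exposure[i])
  expectation[i] <- a / (a + b)      # posterior mean after trial i
}
expectation  # drifts toward the relative frequency of structure A in the input

The point of the sketch is simply that expectations move toward the statistics of the recent input, which is the qualitative pattern the paper reports.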

Running phonetic (adaptation) experiments online


I’ve developed some JavaScript code that somewhat simplifies running experiments online (over, e.g., Amazon’s Mechanical Turk). There’s a working demo, and you can download or fork the source code to tinker with yourself. The code for the core functionality which controls stimulus display, response collection, etc. is also available in its own repository if you just want to build around that.

If you notice a bug, or have a feature request, open an issue on the issue tracker (preferred), or comment here with questions and ideas. And, of course, if you want to contribute, please go ahead and submit a pull request. Everything’s written in HTML, CSS, and JavaScript (+ jQuery) and aims to be as extensible as possible. Happy hacking!

If you find this code useful, please refer others to this page. If you’d like to cite something to acknowledge this code (or your own code based on it), the following is the paper in which we first used this paradigm:

  1. Kleinschmidt, D. F., and Jaeger, T. F. 2012. A continuum of phonetic adaptation: Evaluating an incremental belief-updating model of recalibration and selective adaptation. Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci12), 605-610. Austin, TX: Cognitive Science Society.

A more detailed journal paper is currently under review. If you’re interested, subscribe to this post and get the update when we post the paper here once it’s out (or contact me if you can’t wait).
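
For readers wondering what “incremental belief updating” amounts to, here is a rough, hypothetical R sketch (not the model or numbers from the paper): beliefs about the mean of a phonetic cue, say VOT, are updated after each exposure token under a conjugate normal model with a known cue variance.

# hypothetical sketch: conjugate normal updating of beliefs about a category's
# mean VOT, assuming the within-category variance is known for simplicity
mu  <- 60; tau <- 15            # prior mean and sd of the category mean (ms)
sigma <- 10                     # assumed within-category sd (ms)
tokens <- c(45, 50, 48, 52)     # shifted exposure tokens (ms)
for (x in tokens) {
  v   <- 1 / (1 / tau^2 + 1 / sigma^2)   # posterior variance after this token
  mu  <- v * (mu / tau^2 + x / sigma^2)  # posterior mean
  tau <- sqrt(v)
}
c(mean = mu, sd = tau)  # beliefs have shifted toward the exposure tokens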

Our Special Issue is coming out: Parsimony and Redundancy in Models of Language


It’s almost done! After about two years of work, our Special Issue on Parsimony and Redundancy in Models of Language (Wiechmann, Kerz, Snider & Jaeger 2013) is about to come out in Language and Speech, Vol. 56(3). The brunt of the editorial work in putting this together was shouldered by Daniel Wiechmann, who just started his new position at the University of Amsterdam, and Elma Kerz, in the Department of Anglistik at the University of Aachen.

Cover of Special Issue on Parsimony and Redundancy in Models of Language (in Language and Speech)

I am excited about this Special Issue, which, I think, brings together a variety of positions on representational redundancy and parsimony in linguistic theory building, as well as on the role of redundancy in the development of language over time. Some contributions discuss different computational and representational architectures; other contributions test these theories or investigate specific assumptions about the nature of linguistic representations.

Erratum: Jaeger and Snider (2013) wrongly summarizes one (non-critical) aspect of the results of Bernolet and Hartsuiker (2010)


In Jaeger and Snider (2013), we wrongly summarized one aspect of the experiments conducted by Bernolet and Hartsuiker (2010) on syntactic priming in the Dutch ditransitive alternation. This does not affect the validity of our argument, but should nevertheless be noted.

On pp. 71–72, we wrote:

… Bernolet and Hartsuiker investigate the effect of prime surprisal in the Dutch dative alternation. They find stronger priming for more surprising DO primes, but no such effect for PO primes. As a matter of fact, Bernolet and Hartsuiker do not observe any priming for PO primes. 

While the first statement about their experiment is correct, the second statement is wrong. Although Bernolet and Hartsuiker observed stronger priming effects for DOs than for POs in Dutch, the effects reached significance for both DO and PO primes. This result is still consistent with the point we were making (that it is harder to detect error-sensitivity of syntactic priming for structures that exhibit only small syntactic priming effects to begin with). We are sorry for this mistake and appreciate that Sarah (Bernolet) caught it!

References

  1. Jaeger, T. F. and Snider, N. (2013). Alignment as a consequence of expectation adaptation: syntactic priming is affected by the prime’s prediction error given both prior and recent experience. Cognition 127(1), 57–83. doi:10.1016/j.cognition.2012.10.013.
  2. Bernolet, S., & Hartsuiker, R. J. (2010). Does verb bias modulate syntactic priming? Cognition, 114, 455–461. doi:10.1016/j.cognition.2009.11.005

now, this is broader impact


This is federal funds well-spent: after the CUNY Sentence Processing Conference, Daniel Pontillo reaches out to the broader public and explains –to a captive audience of night owls at a packed IHOP– how eye-tracking data allow us to test how we process the world (the poster is on implicit naming, or rather the lack thereof, in visual world experiments). The presentation was a resounding success. One member of an underrepresented minority was likely recruited for a research career in the cognitive sciences. A brawl that later ensued on the same premises stands in no relation to this presentation, in which only waffles were harmed. Science never stops. We are grateful for all feedback received from IHOPers during the poster presentation.

(Disclaimer: federal funds were only used to print the poster, which was first presented at the Sentence Processing Conference.)

Dan Pontillo gives an impromptu poster presentation at the IHOP around 2-something a.m., Columbia, S.C.

Academic minute about Masha’s work


An academic minute about Masha Fedzechkina’s work on the existence of a bias for efficient information transfer during language acquisition just came out — you can listen to it here.

The work described in the minute appeared as:

Fedzechkina, M., Jaeger, T. F., & Newport, E. (2012). Language learners restructure their input to facilitate efficient communication. Proceedings of the National Academy of Sciences, 109(44), 17897–17902.
[paper (DOI)] [BibTeX]

Unfortunately, they did not mention the team members (contrary to what I was assured). The lead author, Masha Fedzechkina, is a fourth-year graduate student in Brain and Cognitive Sciences at the University of Rochester. The work was jointly advised by Elissa Newport (the director of the new Center for Brain Plasticity and Recovery at Georgetown University) and me.

Special Issue on “Laboratory in the Field: Advances in cross-linguistic psycholinguistics”


Aim

We invite original and unpublished papers on psycholinguistic research on lesser-studied languages, for a special issue of Language and Cognitive Processes. Our purpose is to bring together researchers who are currently engaged in empirical research on language processing in typologically diverse languages, in order to establish the emerging field of cross-linguistic psycholinguistics as a cross-disciplinary research program. Both submissions that extend the empirical coverage of psycholinguistic theories (e.g., test whether supposedly universal processing mechanisms hold cross-linguistically) and submissions that revise and extend psycholinguistic and linguistic theory through quantitative data are welcome. The special issue will focus on the architecture and mechanisms underlying language processing (both comprehension and production) at the lexical and sentence level. This includes studies on phonological and morphological processing to the extent that they speak to the organization, representation, and processing of lexical units or the interaction of these processes with sentence processing. We seek behavioral, neurocognitive (e.g., ERP, fMRI), and quantitative corpus studies in any of these areas.


Effects of phonological overlap on fluency, speech rate, and word order in unscripted sentence production


The last two papers based on Katrina Furth’s and Caitie Hilliard’s work from their time at Rochester have just come out in the Journal of Experimental Psychology: Learning, Memory, and Cognition and in Frontiers in Psychology.

The JEP:LMC paper investigates how lemma selection (i.e., word choice) is affected by phonological overlap. We find evidence for a (weak) bias against sequences of words with phonologically overlapping onsets. That is, when speakers have a choice, they seem to prefer sentences like “Hannah gave the hammer to the boy” over “Hannah handed the hammer to the boy”. This suggests very early effects of phonology on lexical production, which seem to be incompatible with strictly serial models of word production.

Jaeger, T. F., Furth, K., and Hilliard, C. 2012. Phonological overlap affects lexical selection during sentence production. Journal of Experimental Psychology: Learning, Memory, and Cognition, 38(5), 1439-1449. [doi: 10.1037/a0027862]

The Frontiers paper investigates how phonological overlap affects fluency, speech rate, and word order in unscripted sentence production.

Language is shaped by brain’s desire for clarity and ease


Congratulations to Masha Fedzechkina on her article on a bias for efficient information transfer during language learning that has just appeared in the Proceedings of the National Academy of Sciences (link to article).

Here’s some news coverage

More to come soon.

Erratum: We are sorry that in our paper we forgot to acknowledge the help of three undergraduate research assistants, Andy Wood, Irene Minkina, and Cassandra Donatelli, in preparing the video animations used during our artificial language learning task.

HLP lab will be at the LSA 2013 summer institute


Come join us in Ann Arbor, MI for the 2013 Summer Institute of the Linguistic Society of America. You can follow the institute on Facebook.

Victor Ferreira and I will be organizing a workshop on How the brain accommodates variability in linguistic representations (more on that soonish). I will be teaching a class on regression and mixed models and I am sure a bunch of other folks from the lab will be there, too.


New R library for multilevel modeling


This might be of interest to many of you. MLwiN, a software package for multilevel modeling developed at Bristol that includes functions beyond those present in, e.g., lmer, now has an interface for R (similar to the R interfaces for WinBUGS, etc.), so that you can continue to work in R while taking advantage of the powerful tools in MLwiN. The package is called R2MLwiN. For more details, see below.

Dear all,
We are pleased to announce a new R package, R2MLwiN (Zhang et al. 2012)
that allows R users access to the functionality within MLwiN directly from
within the R package. This package has been developed as part of the e-STAT
ESRC digital social research programme grant along with the Stat-JR package.
See <http://www.bristol.ac.uk/cmm/software/r2mlwin/> for more details
including examples taken from the book MCMC Estimation in MLwiN.

Feedback gratefully received by either me or Zhengzheng Zhang (Z.Zhang@bristol.ac.uk).

Best wishes,
  Bill Browne. 
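
To give a flavor of what this looks like in practice, here is a minimal, hypothetical sketch; the data set and variable names are made up, and you should consult the R2MLwiN documentation for the exact formula syntax, estimation options, and how to point the package to your local MLwiN installation:

library(lme4)
library(R2MLwiN)
# tell R2MLwiN where MLwiN is installed (path is an example only)
options(MLwiN_path = "C:/Program Files/MLwiN v2.26/")
# a random-intercept model fit with lmer, for comparison
fit.lmer  <- lmer(RT ~ condition + (1 | subject), data = my.data)
# a (roughly) corresponding call through MLwiN; EstM = 1 requests MCMC estimation
fit.mlwin <- runMLwiN(RT ~ 1 + condition + (1 | subject), data = my.data,
                      estoptions = list(EstM = 1))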

Two new HLP lab papers and some thoughts on implicit learning


I am happy to report on two new HLP lab papers on implicit learning in language and beyond that were recently accepted for publication:

The first paper by Ting Qian is an opinion piece on learning and theories of learning in a world in which evidence is presented sequentially and where deviations from the expected always carry with them ambiguity about the cause of such deviation. So, how do learners figure out how to construct sufficiently adequate (i.e. good in coverage, though not necessarily accurate in terms of assumptions about the causes) causal theories of the world?


congratulations to Ting Qian and Dave Kleinschmidt


Congratulations to Ting Qian and Dave Kleinschmidt, both students in the Brain and Cognitive Sciences program at Rochester and members of HLP Lab, for being awarded a Google Travel Grant to CogSci 2012 in Sapporo, Japan, where they will present their work, which centers around implicit statistical learning and adaptation during language acquisition and processing:

And, thank you, dear Google.