We hope to see y’all at CUNY in a few weeks. In the interest of luring you to some of our posters, here’s an overview of the work we’ll be presenting. In particular, we invite our reviewers, who so boldly claimed (but did not provide references for) the triviality of our work ;), to visit our posters and help us mere mortals understand.
- Articulation and hyper-articulation
- Unsupervised and supervised learning during speech perception
- Syntactic priming and implicit learning during sentence comprehension
- Uncovering the biases underlying language production through artificial language learning
Interested in more details? Read on. And, as always, I welcome feedback. (To prevent spam, first-time posters are moderated; after that, your posts will show directly.)
The summer conference season is coming up and HLP Lab, friends, and collaborators will be presenting their work at CMCL (Baltimore, joint with ACL), ACL (Baltimore), CogSci (Quebec City), and IWOLP (Geneva). I wanted to take this opportunity to give an update on some of the projects we’ll have a chance to present at these venues. I’ll start with three semi-randomly selected papers.
At long last, Alex Fine‘s paper on syntactic expectation adaptation is about to appear in PLOS ONE. You can download the pre-proof from our academia.edu page (the final version will be linked there as soon as it’s available):
- Fine, A. B., Jaeger, T. F., Farmer, T., and Qian, T. 2013. Rapid expectation adaptation during syntactic comprehension. PLOS ONE.
The paper presents a novel framework that ties together syntactic comprehension and implicit learning, connecting work on expectation-based sentence understanding, syntactic priming in comprehension, statistical learning, and speaker-specificity in syntactic comprehension. In two self-paced reading studies, we show that readers rapidly adjust their expectations for specific syntactic structures to converge on the statistics of the current environment. They do so based on both previous experience and recent experience within the experiment.
Since more and more folks are running web-based experiments (typically via Amazon’s Mechanical Turk or other platforms), I thought I’d put together a little sampling of demo experiments. We’ll keep updating this periodically, so feel free to subscribe to the RSS feed. Note that not all of the paradigms listed below were developed by HLP Lab members (for credits, see below). We might release some of our paradigms for use by others soon. If you’re interested, please leave a comment below and subscribe to this page. This is the easiest way for us to keep you in the loop. Thank you for understanding.
And while I’m at it, let me point out this sweet tool for running self-paced reading experiments over the web, developed by Alex Drummond (many thanks to Carlos Gomez Gallo for pointing this software out to me). For an implemented example, see Masha Polinsky’s lab page. Also check this page for details on how this web-based self-paced reading paradigm has been tested with different keyboard setups.
I’ve been using a two-step approach: in the first step, I use all data from an experiment (including fillers, but not practice items) to fit a model of log-transformed raw reading times with:
- word length (Wlen)
- position of word in stimulus (Wpos)
- position of stimulus in list (Lpos)
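For concreteness, here is a minimal sketch of that first step in Python with numpy (analyses like this are more often done in R, and the data here are simulated; the variable names Wlen, Wpos, and Lpos just follow the list above):

```python
import numpy as np

# Simulated stand-in data: one row per word. In practice these values
# would come from the experiment's trial-level output (fillers
# included, practice items excluded).
rng = np.random.default_rng(0)
n = 500
wlen = rng.integers(1, 12, size=n)   # Wlen: word length in characters
wpos = rng.integers(1, 15, size=n)   # Wpos: position of word in stimulus
lpos = rng.integers(1, 60, size=n)   # Lpos: position of stimulus in list
rt = np.exp(5.5 + 0.03 * wlen - 0.01 * wpos + rng.normal(0, 0.2, n))

# Step 1: regress log-transformed raw reading times on the three
# predictors (plus an intercept) via ordinary least squares.
X = np.column_stack([np.ones(n), wlen, wpos, lpos])
y = np.log(rt)
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residuals: reading times with length- and position-related variance
# removed, available for a second analysis step.
residuals = y - X @ coefs
```

In a real analysis one would of course also include random effects for subjects and items (e.g. a mixed model); the plain OLS fit above is only meant to illustrate the structure of the first step.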