The summer conference season is coming up and HLP Lab, friends, and collaborators will be presenting their work at CMCL (Baltimore, joint with ACL), ACL (Baltimore), CogSci (Quebec City), and IWOLP (Geneva). I wanted to take this opportunity to give an update on some of the projects we’ll have a chance to present at these venues. I’ll start with three semi-randomly selected papers.
Esteban Buz (BCS, Rochester) is investigating hyperarticulation during isolated word production in a novel web-based paradigm, recording speech via Mechanical Turk (thanks to Ian McGraw and Andrew Watts for making this work). Replicating work by Kirov and Wilson, Esteban finds hyperarticulation of voicing contrasts for contextually confusable words. Critically, he also finds that these hyperarticulations are not predicted by changes in the difficulty of articulating, contrary to proposals by Bell et al. (2009) and Gahl et al. (2012). This calls into question hypotheses that attribute hypo- and hyper-articulation to processes in production planning. Rather, we argue the results are compatible with accounts that hold that speakers (also) modulate the amount of articulatory detail provided in order to balance production effort and intelligibility (cf. Lindblom, 1990; Jaeger, 2013). It’d be interesting to see this pitched against competition-based accounts of these articulatory differences (e.g., Baese-Berk et al., 2009 and, I hear, further work under review by Kirov and Wilson).
- Buz, E., Jaeger, T. F., and Tanenhaus, M. K. 2014. Contextual confusability leads to targeted hyperarticulation. In TBA (eds.) Proceedings of the 36th Annual Meeting of the Cognitive Science Society (CogSci14), TBA. Austin, TX: Cognitive Science Society.
What determines the difficulty we experience during incremental sentence understanding (parsing)? Tal Linzen (Linguistics, NYU) pitches surprisal (Hale, 2001; Levy, 2008) and entropy reduction (Hale, 2003), as well as other related hypotheses, against each other as measures of word-by-word reading difficulty. It’s a tour-de-force, and the results are somewhat ambiguous … it seems that both measures make unique contributions (although the entropy reduction effect is more ambiguous and more dependent on the specific approach taken). Congratulations to Tal. I uploaded his paper about 2-3 weeks ago and it has already been read 333 times. So do not read it (333 times is a nice number).
- Linzen, T. and Jaeger, T. F. 2014. Investigating the role of entropy in sentence processing. Proceedings of the Cognitive Modeling and Computational Linguistics Workshop at ACL, Baltimore, MD, June 26th, XXX-XXX.
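For readers unfamiliar with the two measures being compared, here is a toy illustration in Python. It computes surprisal and a simplified entropy-reduction value over a made-up bigram model; the probabilities are invented for illustration, and the entropy-reduction function is a stand-in computed over next-word distributions rather than Hale's (2003) full grammar-based formulation:

```python
import math

# Hypothetical next-word distributions from a toy bigram model.
# All probabilities below are illustrative only.
model = {
    ("the",): {"dog": 0.6, "cat": 0.3, "idea": 0.1},
    ("dog",): {"barked": 0.7, "slept": 0.3},
}

def surprisal(context, word):
    """Surprisal in bits: -log2 P(word | context) (Hale, 2001; Levy, 2008)."""
    return -math.log2(model[context][word])

def entropy(context):
    """Shannon entropy (bits) of the next-word distribution."""
    return -sum(p * math.log2(p) for p in model[context].values())

def entropy_reduction(prev_context, new_context):
    """Simplified stand-in for entropy reduction (Hale, 2003): the drop in
    uncertainty about upcoming material after a word, floored at zero."""
    return max(0.0, entropy(prev_context) - entropy(new_context))

print(round(surprisal(("the",), "dog"), 3))               # -log2(0.6) ≈ 0.737
print(round(entropy_reduction(("the",), ("dog",)), 3))
```

The two measures can dissociate: a word can be unsurprising yet still sharply reduce uncertainty about what follows, which is why pitting them against each other on reading-time data is informative.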
Driven by Benjamin Van Durme (former Rochesterian, now at HLTCOE at JHU) and Alex Fine (now NIH post-doctoral fellow at Urbana-Champaign), we finally got one of our longer-running projects, originally initiated by Austin Frank, out the door. In this paper, to be presented at ACL, we investigate the use of web-based ngram measures in predicting human behavior during language processing — either pitted against more traditional sources or used to complement them:
- Fine, A. B., Frank, A., Jaeger, T. F., and Van Durme, B. 2014. Biases in Predicting the Human Language Model. Proceedings of ACL 2014, Baltimore, MD, June 22nd-27th, XXX-XXX.
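As a rough illustration of what an ngram-based predictor of human expectations looks like, here is a minimal sketch. The counts, vocabulary size, and add-one smoothing are all assumptions made for the example, not the estimator used in the paper:

```python
import math

# Hypothetical web-scale ngram counts (the numbers are made up for
# illustration; real web ngram resources provide counts of this kind).
bigram_counts = {("new", "york"): 5_000_000, ("new", "car"): 800_000}
unigram_counts = {"new": 20_000_000}
VOCAB_SIZE = 1_000_000  # assumed vocabulary size for smoothing

def bigram_logprob(w1, w2):
    """Add-one smoothed log P(w2 | w1): a simple count-based estimate of
    the kind of ngram measure used to predict word-by-word processing."""
    count = bigram_counts.get((w1, w2), 0)
    context_count = unigram_counts.get(w1, 0)
    return math.log((count + 1) / (context_count + VOCAB_SIZE))
```

Measures like this can then be entered as predictors of reading times or other behavioral data, alongside or against estimates from traditional balanced corpora.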
Feedback as always welcome. To be continued (3 more papers to come soon).