A few days ago, I posted a summary of some recent work on syntactic alignment with Kodi Weatherholtz and Kathryn Campbell-Kibler (both at The Ohio State University), in which we used the WAMI interface to collect speech data for research on language production over Amazon’s Mechanical Turk.
The first step in our OSU-Rochester collaboration on socially-mediated syntactic alignment was submitted a couple of weeks ago. Kodi Weatherholtz in Linguistics at The Ohio State University took the lead on this project, together with Kathryn Campbell-Kibler (same department) and me.
We collected spoken picture descriptions via Amazon’s crowdsourcing platform Mechanical Turk to investigate how social attitudes towards an interlocutor and conflict management style affect syntactic priming. Our paradigm combines …
If you find this code useful for your purposes, please refer others to this page. If you’d like to cite something to acknowledge this code (or your own code based on it), please cite the paper in which we first used this paradigm:
- Kleinschmidt, D. F., & Jaeger, T. F. (2012). A continuum of phonetic adaptation: Evaluating an incremental belief-updating model of recalibration and selective adaptation. Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci12), 605–610. Austin, TX: Cognitive Science Society.
A more detailed journal paper is currently under review. If you’re interested, subscribe to this post to be notified when we post the paper here once it’s out (or contact me if you can’t wait).
Since more and more folks are running web-based experiments (typically via Amazon’s Mechanical Turk or other platforms), I thought I’d put together a little sampling of demo experiments. We’ll keep updating this periodically, so feel free to subscribe to the RSS feed. Note that not all of the paradigms listed below were developed by HLP Lab members (for credits, see below). We might release some of our paradigms for use by others soon. If you’re interested, please leave a comment below and subscribe to this page; that’s the easiest way for us to keep you in the loop. Thank you for understanding.
That’s why I’m providing the (commented) code I wrote to create my MechTurk experiment. A short demo version of the experiment can be found here.
If you find this code helpful, please consider acknowledging it via the following URL in your paper/presentation to spread the word:
And while I’m at it, let me point out this sweet tool for running self-paced reading experiments over the web, developed by Alex Drummond (many thanks to Carlos Gomez Gallo for pointing this software out to me). For an implemented example, see Masha Polinsky’s lab page. Also check this page on how this web-based self-paced reading paradigm has been tested with different keyboard setups.
The next step is to implement a spoken recall paradigm. If anyone out there has already done that, let me know.
We also tested progressive payment as a way to elicit more balanced data sets. Whereas typical MechTurk data sets exhibit a Zipf-like distribution of trials per participant (most workers complete only a few trials), a simple progressive scheme ($.20 for at least 20 trials, $.50 for at least 40 trials, etc.) worked quite well to drastically increase the percentage of data that comes from participants who completed the entire experiment (see the sketch below).
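To make the scheme concrete, here is a minimal sketch (in Python, and not the code we actually used) of how such a tiered payment could be computed. Only the first two tiers come from the description above; the higher tiers are made-up placeholders for the “etc.”

```python
# Hypothetical sketch of a progressive payment scheme.
# Only the first two tiers ($.20 for >= 20 trials, $.50 for >= 40 trials)
# are from the post; the higher tiers are assumed placeholders.

PAYMENT_TIERS = [  # (minimum trials completed, payment in USD)
    (20, 0.20),
    (40, 0.50),
    (60, 0.90),   # assumed continuation of the "etc."
    (80, 1.40),   # assumed continuation of the "etc."
]

def progressive_payment(trials_completed: int) -> float:
    """Return the payment earned for a given number of completed trials."""
    payment = 0.0
    for min_trials, amount in PAYMENT_TIERS:
        if trials_completed >= min_trials:
            payment = amount
    return payment

if __name__ == "__main__":
    for n in (10, 25, 45, 100):
        print(n, "trials ->", progressive_payment(n))
```

The point of such a scheme is simply that finishing the whole experiment pays disproportionately more than doing a handful of trials, which nudges workers toward completing it.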
Furthermore, HLP Lab manager Andrew Watts has written a little script that ensures that each participant sees each item in only one condition and that conditions are counterbalanced across participants (worker IDs). We’re still working on some details, but once it’s ready for prime time, we’ll share it here.
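For illustration, here is a minimal sketch (again in Python, and not Andrew’s actual script) of the basic idea: a Latin-square rotation that gives each participant every item in exactly one condition, while rotating conditions across experimental lists so that, across participants, every item appears in every condition.

```python
# Minimal Latin-square counterbalancing sketch (not the actual HLP Lab script).
# Each experimental list assigns every item to exactly one condition;
# rotating the list number cycles items through the conditions.

def assign_list(items, conditions, list_number):
    """Map each item to one condition for a given experimental list."""
    n_cond = len(conditions)
    return {
        item: conditions[(i + list_number) % n_cond]
        for i, item in enumerate(items)
    }

# Example: 4 items, 2 conditions, 2 counterbalanced lists.
items = ["item1", "item2", "item3", "item4"]
conditions = ["prime_passive", "prime_active"]

# In practice, the list number might be derived from the worker ID
# (e.g., by counting previously assigned workers) so that lists end up
# roughly balanced across participants.
for list_number in range(len(conditions)):
    print(list_number, assign_list(items, conditions, list_number))
```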