Mechanical Turk

Another example of recording spoken productions over the web


A few days ago, I posted a summary of some recent work on syntactic alignment with Kodi Weatherholtz and Kathryn Campbell-Kibler (both at The Ohio State University), in which we used the WAMI interface to collect speech data for research on language production over Amazon’s Mechanical Turk.

Jaeger and Grimshaw (2013). Poster presented at AMLaP, Marseilles, France.


Socially-mediated syntactic alignment


The first step in our OSU-Rochester collaboration on socially-mediated syntactic alignment was submitted a couple of weeks ago. Kodi Weatherholtz in Linguistics at The Ohio State University took the lead on this project, together with Kathryn Campbell-Kibler (same department) and me.

Welcome screen with sound check from our web-based speech recording experiment.

We collected spoken picture descriptions via Amazon’s crowdsourcing platform Mechanical Turk to investigate how social attitudes towards an interlocutor and conflict management styles affect syntactic priming. Our paradigm combines …

Running phonetic (adaptation) experiments online


I’ve developed some JavaScript code that somewhat simplifies running experiments online (e.g., over Amazon’s Mechanical Turk). There’s a working demo, and you can download or fork the source code to tinker with yourself. The code for the core functionality, which controls stimulus display, response collection, etc., is also available in its own repository if you just want to build around that.
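To give a rough sense of what that core functionality has to handle, here is a minimal sketch of presenting a stimulus and collecting a timed keyboard response with jQuery. It is not the API of the repository above; the element ID, function name, and response format are hypothetical.

```javascript
// Minimal sketch (hypothetical names, not the repository's API): show a
// stimulus in an element with id "stimulus", collect one keyboard response
// from a set of valid keys, and record the reaction time.
function showTrial(stimulusText, validKeys, callback) {
  var onset = Date.now();
  $('#stimulus').text(stimulusText).show();

  function handler(e) {
    var key = String.fromCharCode(e.which).toUpperCase();
    if (validKeys.indexOf(key) === -1) return;  // ignore irrelevant keys
    $(document).off('keydown', handler);        // one response per trial
    $('#stimulus').hide();
    callback({ key: key, rt: Date.now() - onset });
  }

  $(document).on('keydown', handler);
}

// Usage: present one trial and log the response.
showTrial('dag', ['B', 'D'], function (resp) {
  console.log('response:', resp.key, 'RT:', resp.rt, 'ms');
});
```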

If you notice a bug or have a feature request, open an issue on the issue tracker (preferred), or comment here with questions and ideas. And, of course, if you want to contribute, please go ahead and submit a pull request. Everything’s written in HTML, CSS, and JavaScript (plus jQuery) and aims to be as extensible as possible. Happy hacking!

If you find this code useful for your purposes, please refer others to this page. If you’d like to cite something to acknowledge this code, or your own code based on it, the following is the paper in which we first used this paradigm:

  1. Kleinschmidt, D. F., and Jaeger, T. F. (2012). A continuum of phonetic adaptation: Evaluating an incremental belief-updating model of recalibration and selective adaptation. In Proceedings of the 34th Annual Meeting of the Cognitive Science Society (CogSci12), 605-610. Austin, TX: Cognitive Science Society.

A more detailed journal paper is currently under review. If you’re interested, subscribe to this post to get an update when we post the paper here once it’s out (or contact me if you can’t wait).

Collect keyboard responses asynchronously in JavaScript


Some of our Mechanical Turk experiments are written in straight-up JavaScript, which gives you a lot of control and flexibility, but at the expense of having to write some pretty basic functionality from scratch. I was recently in a situation where I wanted to collect separate keyboard responses in different but possibly overlapping time windows: stimuli come in fast, and on some of them the subject needs to press the spacebar. Rather than change my design so that the response windows would never overlap, I decided to write a function that collects a one-off keyboard response asynchronously, meaning that other experiment control code can keep running behind it.
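As a rough illustration of the idea (not the original function), here is a sketch of a one-off, asynchronous spacebar listener with a response deadline. The function name and response format are made up for this example, and it assumes jQuery is loaded.

```javascript
// Hypothetical sketch: listen for a single spacebar press within a response
// window of windowMs milliseconds, without blocking other experiment code.
function collectResponse(windowMs, onResponse) {
  var start = Date.now();
  var answered = false;

  function handler(e) {
    if (e.which !== 32 || answered) return;   // 32 = spacebar
    answered = true;
    cleanup();
    onResponse({ pressed: true, rt: Date.now() - start });
  }

  function cleanup() {
    $(document).off('keydown', handler);
    clearTimeout(timer);
  }

  var timer = setTimeout(function () {        // window elapsed: count as a miss
    if (!answered) {
      answered = true;
      cleanup();
      onResponse({ pressed: false, rt: null });
    }
  }, windowMs);

  $(document).on('keydown', handler);
}

// Because the listener is asynchronous, two overlapping windows can be open at once:
collectResponse(1500, function (r) { console.log('window 1:', r); });
setTimeout(function () {
  collectResponse(1500, function (r) { console.log('window 2:', r); });
}, 750);
```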

Some examples of web-based experiments


Since more and more folks are running web-based experiments (typically via Amazon’s Mechanical Turk or other platforms), I thought I’d put together a little sampling of demo experiments. We’ll keep updating this periodically, so feel free to subscribe to the RSS feed. Note that not all of the paradigms listed below were developed by HLP Lab members (for credits, see below). We might release some of our paradigms for use by others soon. If you’re interested, please leave a comment below and subscribe to this page; that’s the easiest way for us to make sure we keep you in the loop. Thank you for understanding.

Using pyjamas to program external Mechanical Turk experiments


I recently set up my first external Mechanical Turk study. My greatest friend and foe in this process was pyjamas, a Python-to-JavaScript compiler and widget set API. THE great advantage of using pyjamas: you can program your entire experiment in Python, and pyjamas will create the browser-dependent JavaScript code. If you already know JavaScript, writing your experiment in Python without having to worry about browser-dependent issues will save you time. And if you don’t, you don’t have to go through the frustrating process of learning JavaScript. On the downside, the documentation for pyjamas is currently not very good, so figuring out how to get things to work can take a while.

That’s why I’m providing the (commented) code that I generated to create my MechTurk experiment. A short demo version of the experiment can be found here.

If you find this code helpful, please consider acknowledging it via the following URL in your paper/presentation to spread the word:
https://hlplab.wordpress.com/2011/12/25/using-pyjamas-to-program-external-mechanical-turk-experiments/

A screenshot of the experiment. Participants were asked to rate on a 7-point scale how natural the statement they heard was as a description of the scene.


Fine-grained linguistic knowledge, CUNY poster


And here is one more poster from CUNY. This one is work by Robin Melnick at Stanford, together with Tom Wasow. Robin ran forced-choice and 100-point-preference norming experiments on that-mentioning in relative and complement clauses to investigate the extent to which the factors that affect processing correlate with those that affect acceptability judgments. Going beyond previous work, he directly correlates the effect sizes of individual predictors in the processing and acceptability models. All experiments were run both in the lab and over the web using Mechanical Turk.

Self-paced reading via WWW


And while I’m at it, let me point out this sweet tool for running self-paced reading experiments over the web, developed by Alex Drummond (many thanks to Carlos Gomez Gallo for pointing this software out to me). For an implemented example, see Masha Polinsky’s lab page. Also check this page on how this web-based self-paced reading paradigm has been tested with different keyboard setups.

Mech Turk and Written Recall


Ah, we’ve just gotten back the first results from two studies that used a written recall paradigm via Mechanical Turk to test a couple of predictions of Uniform Information Density. You can see an example template of a written recall procedure here (JavaScript required). Each study took about one day for the equivalent of 20 participants (balanced across 4 lists) at $.02 per trial plus some bonuses (see below).

The next step is to implement a spoken recall paradigm. If anyone out there has already done that, let me know.

We also tested progressive payment as a way to elicit more balanced data sets. Whereas normal Mechanical Turk data sets exhibit Zipf-like distributions of trials per participant, a simple progressive scheme ($.20 for at least 20 trials, $.50 for at least 40 trials, etc.) worked quite well, drastically increasing the percentage of data that comes from participants who completed the entire experiment.
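For concreteness, a tiered bonus like the one described above could be computed along these lines. This is a hypothetical sketch, not our actual payment script, and only the two tiers mentioned in the text are included.

```javascript
// Hypothetical sketch of the progressive bonus scheme described above:
// the bonus (in dollars) grows with the number of completed trials.
function progressiveBonus(trialsCompleted) {
  if (trialsCompleted >= 40) return 0.50;   // $.50 for at least 40 trials
  if (trialsCompleted >= 20) return 0.20;   // $.20 for at least 20 trials
  return 0.00;                              // otherwise, per-trial pay only
}

console.log(progressiveBonus(25));  // 0.2
console.log(progressiveBonus(60));  // 0.5
```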

Furthermore, HLP Lab manager Andrew Watts has written a little script that makes sure each participant sees each item in only one condition, and that conditions are counterbalanced across participants (worker IDs). We’re still working on some details, but once it’s ready for prime time, we’ll share it here.
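The underlying idea is a standard Latin-square design. The sketch below is not Andrew’s script; it just illustrates the assignment logic, with hypothetical function names and a numeric participant index standing in for worker IDs.

```javascript
// Hypothetical sketch of Latin-square list assignment: each participant sees
// every item exactly once, in one condition, and conditions rotate across
// participants according to their assigned list number.
function assignList(participantIndex, nConditions) {
  return participantIndex % nConditions;              // list 0 .. nConditions-1
}

function buildList(items, nConditions, listNumber) {
  // items: array of arrays, where items[i][c] is item i in condition c
  return items.map(function (conditions, i) {
    return conditions[(i + listNumber) % nConditions];
  });
}

// Example: 4 items x 2 conditions; participant 3 gets list 3 % 2 = 1.
var items = [['i1-a', 'i1-b'], ['i2-a', 'i2-b'], ['i3-a', 'i3-b'], ['i4-a', 'i4-b']];
console.log(buildList(items, 2, assignList(3, 2)));
// -> ['i1-b', 'i2-a', 'i3-b', 'i4-a']
```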