
Using pyjamas to program external Mechanical Turk experiments


I recently set up my first external Mechanical Turk study. My greatest friend and foe in this process was pyjamas, a Python-to-JavaScript compiler and widget set API. The great advantage of using pyjamas: you can program your entire experiment in Python, and pyjamas will generate the browser-dependent JavaScript code. If you already know JavaScript, writing your experiment in Python without having to worry about browser-specific issues will save you time. And if you don’t, you are spared the frustrating process of learning JavaScript. On the downside, the documentation for pyjamas is currently not very good, so figuring out how to get things to work can take a while.
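To give a flavor of what pyjamas code looks like, here is a minimal sketch of a rating-scale page (this is not the experiment code itself; the imports follow the pyjamas.ui module layout, but the class and the record/feedback logic are invented for illustration, and the script only does something when run through the pyjamas toolchain in a browser):

```python
# Minimal pyjamas sketch: a question plus a 7-point rating scale.
# Hypothetical demo code, not the actual experiment.
from pyjamas.ui.RootPanel import RootPanel
from pyjamas.ui.Label import Label
from pyjamas.ui.Button import Button
from pyjamas.ui.HorizontalPanel import HorizontalPanel

class RatingDemo:
    def onModuleLoad(self):
        self.feedback = Label("")
        RootPanel().add(Label("How natural was the statement? "
                              "(1 = very unnatural, 7 = very natural)"))
        scale = HorizontalPanel()
        for rating in range(1, 8):
            # the default argument freezes the current rating for each button
            scale.add(Button(str(rating),
                             lambda sender, r=rating: self.record(r)))
        RootPanel().add(scale)
        RootPanel().add(self.feedback)

    def record(self, rating):
        # in a real experiment you would store the response and advance
        # to the next trial; here we just echo the choice
        self.feedback.setText("You chose %d" % rating)

RatingDemo().onModuleLoad()
```

You write ordinary Python against these widget classes, and the pyjamas compiler turns the whole thing into JavaScript that runs in the participant's browser.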

That’s why I’m providing the (commented) code that I generated to create my MechTurk experiment. A short demo version of the experiment can be found here.

If you find this code helpful, please consider acknowledging it in your paper/presentation via the following URL, to help spread the word:

A screenshot of the experiment. Participants were asked to rate on a 7-point scale how natural the statement they heard was as a description of the scene.


Mech Turk and Written Recall


Ah, we’ve just got back the first results of two studies that used a written recall paradigm via Mechanical Turk to test a couple of predictions of Uniform Information Density. You can see an example template of a written recall procedure here (JavaScript required). Each study took about one day for the equivalent of 20 participants (balanced across 4 lists) at $.02 per trial plus some bonuses (see below).

The next step is to implement a spoken recall paradigm. If anyone out there has already done that, let me know.

We also tested progressive payment as a way to elicit more balanced data sets. Whereas typical MechTurk data sets exhibit Zipf distributions in the number of trials per participant, a simple progressive scheme ($.20 for at least 20 trials, $.50 for at least 40 trials, etc.) drastically increased the percentage of data coming from participants who completed the entire experiment.
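A scheme like this is trivial to compute when paying out bonuses. The sketch below uses the two tiers from the post; whether tiers stack or only the highest applies is our assumption (here: highest tier only), as is the function name:

```python
def progressive_bonus(n_trials, tiers=((40, 0.50), (20, 0.20))):
    """Return the bonus (in dollars) for a worker who completed n_trials.

    Tiers are checked from highest threshold down, so the worker
    receives the single highest bonus tier they reached.
    """
    for threshold, bonus in tiers:
        if n_trials >= threshold:
            return bonus
    return 0.0
```

For example, a worker who quit after 10 trials earns no bonus, one with 25 trials earns $.20, and one who finished all 40 earns $.50.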

Furthermore, HLP lab manager Andrew Watts has written a small script that ensures that each participant sees each item in only one condition and that conditions are counterbalanced across participants (worker IDs). We’re still working on some details, but once it’s ready for prime time, we’ll share it here.
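Andrew's script isn't public yet, but the underlying idea is the standard Latin-square design, which can be sketched in a few lines (function names and the round-robin worker assignment are our own illustration, not his implementation):

```python
def build_list(items, n_conditions, list_index):
    """Latin-square rotation: on list `list_index`, item i appears in
    condition (i + list_index) % n_conditions, so each participant sees
    each item in exactly one condition, and across the n_conditions
    lists every item appears in every condition."""
    return [(item, (i + list_index) % n_conditions)
            for i, item in enumerate(items)]

def assign_list(worker_ids, n_lists=4):
    """Assign workers (sorted by worker ID) to lists round-robin,
    keeping the number of participants per list balanced."""
    return {wid: i % n_lists
            for i, wid in enumerate(sorted(set(worker_ids)))}
```

With 4 conditions and 4 lists, rotating the condition assignment by the list index is what guarantees the counterbalancing: no item repeats a condition within a list, and no condition is over-represented across lists.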