Some examples of web-based experiments

Since more and more folks are running web-based experiments (typically via Amazon’s Mechanical Turk or other platforms), I thought I’d put together a little sampling of demo experiments. We’ll keep updating this page periodically, so feel free to subscribe to the RSS feed. Note that not all of the paradigms listed below were developed by HLP Lab members (for credits, see below). We might release some of our paradigms for use by others soon. If you’re interested, please leave a comment below and subscribe to this page; that is the easiest way for us to keep you in the loop. Thank you for understanding.

  • Priming, adaptation, and learning
    • Reading-to-writing syntactic priming (Experiment by Camber Hansen-Karr [formerly HLP Lab], JavaScript and server side scripting by Andrew Watts [HLP Lab])
    • Phonetic adaptation (Experiment and code by Dave Kleinschmidt [HLP Lab], stimuli by Jean Vroomen [Tilburg University])
    • Lexical adaptation (coming soon, Experiment by Ilker Yildrim [Robbie Jacobs’ Lab, Rochester] and Judith Degen [MTanLab, Rochester], pyjamas code by Ilker Yildrim).
      • If you’re interested in the python-based package for python-to-javascript conversion, which comes with a variety of widgets that are useful for creating web-based experiments, make sure to read this recent HLP Lab post by Judith Degen on the pyjamas package (there is also a short illustrative sketch right after this list).
    • Artificial language learning (Experiment by Masha Fedzechkina [HLP Lab], stimuli by Lissa Newport, Flash applet by Harry Tily [TedLab, MIT], JavaScript by Andrew Watts [HLP Lab]).
  • Production:
    • Written sentence recall (Experiment by Florian Jaeger [HLP Lab], JavaScript and server side scripting by Andrew Watts [HLP Lab])
    • Spoken sentence recall (coming soon, WAMI recording interface by Ian McGraw [CSAIL, MIT], JavaScript and server side scripting by Andrew Watts [HLP Lab])
    • Picture naming (coming soon, WAMI recording interface by Ian McGraw [CSAIL, MIT], JavaScript and server side scripting by Andrew Watts [HLP Lab])
  • Comprehension:
    • Self-paced reading (run with modified IBex JavaScript code, JavaScript and server side scripting by Andrew Watts [HLP Lab]). An alternative that we are currently testing is a Flash-based applet by Hal Tily [TedLab, MIT].
    • Magnitude estimation (Experiment and code by Neal Snider [formerly HLP Lab, now Nuance Technology])
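
To give a flavor of the pyjamas route mentioned above, here is a minimal illustrative sketch (not HLP Lab code) in the style of the classic pyjamas hello-world: a plain Python script that the pyjamas compiler turns into JavaScript. The widget classes come from the pyjamas UI library; the stimulus sentence and the handler are made up for illustration, so see Judith’s post for the actual experiment widgets.

    # Minimal pyjamas sketch (illustration only, not HLP Lab code): a stimulus
    # label plus a response button, written in Python and compiled to
    # JavaScript by the pyjamas compiler. RootPanel, Label, Button, and Window
    # are pyjamas UI classes; the handler below is a made-up example.
    from pyjamas.ui.RootPanel import RootPanel
    from pyjamas.ui.Label import Label
    from pyjamas.ui.Button import Button
    from pyjamas import Window

    def on_response(sender):
        # A real experiment would record the response (and a timestamp) and
        # advance to the next trial; here we just pop up an alert.
        Window.alert("Response recorded.")

    if __name__ == '__main__':
        RootPanel().add(Label("The horse raced past the barn fell."))
        RootPanel().add(Button("Continue", on_response))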

And while I am at it, let me also recommend


10 thoughts on “Some examples of web-based experiments”

    plagwitz said:
    May 23, 2012 at 6:42 pm

    Thanks for the list – the self-paced reading link should read http://spellout.net/ibexexps/hlplab/rc.spr4-evelin12/experiment.html.

    Joe K. said:
    May 29, 2012 at 1:25 pm

    Thanks for the post, lots of good information here.

    I am a graduate student working in a neuro/psycholinguistics lab myself, and we are just starting up an MT account for our lab. We were wondering about something, though: what is your standard compensation rate for MT workers? We’ve heard that some labs pay MT workers the standard lab rate, but from what I understand from the MT overview you provided in your external links (Mason and Suri), it’s also very common to pay about half of minimum wage/standard lab rate. Do you notice a difference in the quantity and/or quality of work done at different compensation rates? It seems from previous studies on the topic that above a certain threshold it doesn’t make much of a difference, but if you could offer any insight into this topic, we’d be much obliged.

    Thanks for your time!

      tiflo said:
      May 29, 2012 at 2:25 pm

      Hi Joe,

      I hope others will chime in so we can see what the different standards are. I know that some labs pay as little as they can get away with (that’s probably around $2.50/h for a task that isn’t completely mind-numbing). When I thought about what we should do in our lab, I had three considerations:

      1) We can save tax money and conduct more research if we keep the payment as small as possible, while still attracting sufficient participants.

      2) It seems ethically questionable to pay below minimum wage. The excuse that participation is voluntary is hardly a convincing knock-out argument against this point (that’s why we have labor protection laws in some countries).

      3) According to the ethical guidelines for experiments with human participants, we are not actually supposed to pay participants. Rather, we are supposed to reimburse them for the inconvenience of being in an experiment.

      I think, ultimately, 3) is the most important point. For me, 3) places an upper bound on the amount we pay, one that is sensitive to the context in which we conduct experiments. For example, for our field work in Mexico, we paid participants 50 pesos/h (~$5). That was probably even a bit on the high side. After all, it should arguably not be more profitable for participants to take part in our experiments than it is to work (although I think that ideal is sometimes violated even for experiments conducted on American campuses).

      For Mechanical Turk, this meant that we should pay a rate that is competitive but not too high. In practice, we aim for $6/h for studies intended for publication (sometimes higher). For pilot data, we have paid less. I hope this helps. Let me know what you hear from others.

    Diego said:
    November 21, 2012 at 3:41 am

    Hello,
    really interesting topic! I have a question, though. I would like to run a self-paced reading experiment on Mechanical Turk. Does anybody know if there is any study showing that we can accept the results of a web experiment with the same (or similar) confidence as the more traditional lab-based ones?

    thanks a lot

      tiflo said:
      November 22, 2012 at 12:13 pm

      Hi Diego,

      We’ve replicated a number of previous lab-based results online. I have to say I had less luck with IBex than with Hal Tily’s tool, which allows you to convert Linger files for SPR experiments over the web. My impression is that results are considerably noisier over the web; we typically run about twice as many people. But I don’t know of a systematic investigation of the relation between lab- and web-based SPR. Hal Tily (now at Nuance Technologies) might know.

      HTH,
      Florian

        Diego said:
        November 28, 2012 at 5:51 am

        Thanks, Florian. I’ll get in touch with Hal Tily, and if I get some useful information I’ll report it here as well.

        cheers,

        Diego

    HONG MO KANG said:
    May 30, 2014 at 3:24 pm

    Thank you for the samples. I am a graduate student in psycholinguistics and am trying to find a way to run experiments remotely, so this helps a lot!

    Ehsan said:
    June 1, 2020 at 9:38 am

    Hello
    I am a PhD student in Psycholinguistics, and I’ve been struggling to run an online self-paced reading task in PennController for Ibex. The thing is, I don’t seem to know how to compute reaction times from the results. Would you mind sharing the code for your self-paced reading task, in case that could help me?

    Thanks

      tiflo responded:
      June 1, 2020 at 5:55 pm

      Dear Ehsan,

      I suspect that Ibex has changed a lot since 2012, and that our code wouldn’t do you much good. But for what it’s worth, here’s the code:

      https://github.com/hlplab/rc.spr4

      You can find other spr-ibex experiments on our lab’s github page (https://github.com/hlplab), thanks to Andrew Watts, the former lab manager extraordinaire ;).
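
      For what it’s worth, here is also a rough sketch (an illustration, not the repository code above) of how one might pull per-word reading times out of a classic Ibex results file. The column positions assume the usual DashedSentence output (with word number, word, and reading time among the trailing columns); PennController results files are laid out differently, so treat this only as a starting point and check your own file.

          # Rough illustrative sketch: extract per-word reading times from a
          # classic Ibex results file. Column positions assume the standard
          # DashedSentence output; adjust them to match your own results file.
          import csv

          def read_spr_results(path):
              trials = []
              with open(path, newline="") as f:
                  for row in csv.reader(f):
                      if not row or row[0].startswith("#"):
                          continue  # skip Ibex comment lines
                      if len(row) > 9 and row[2] == "DashedSentence":
                          trials.append({
                              "subject": row[1],      # hash of participant IP
                              "item": row[3],
                              "condition": row[5],    # the "type" column
                              "word_index": int(row[7]),
                              "word": row[8],
                              "rt_ms": int(row[9]),   # per-word reading time in ms
                          })
              return trials

          if __name__ == "__main__":
              for t in read_spr_results("results.csv"):
                  print(t["subject"], t["item"], t["word_index"], t["word"], t["rt_ms"])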
