This is federal funds well spent: after the CUNY Sentence Processing Conference, Daniel Pontillo reaches out to the broader public and explains, to a captive audience of night owls at a packed IHOP, how eye-tracking data allow us to test how we process the world (the poster is on implicit naming, or rather the lack thereof, in visual world experiments). The presentation was a resounding success. One member of an underrepresented minority was likely recruited for a research career in the cognitive sciences. A brawl that later ensued on the same premises stands in no relation to this presentation, in which only waffles were harmed. Science never stops. We are grateful for all the feedback we received from IHOPers during the poster presentation.
(Disclaimer: federal funds were only used to print the poster, which was first presented at the Sentence Processing Conference.)
I’ve been working on consolidating all the different R functions I’ve written over the years for plotting my eye-tracking data into just one amazing super-function (based on the ggplot2 package) that can do it all. Here’s a first attempt that anybody with the right kind of dataset should be able to use to create plots like the ones below (generated from fake data; the R code that generates the data is included at the end of the post). If you find this code helpful, please consider acknowledging it via the following URL in your paper/presentation to spread the word:
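To give a sense of what “the right kind of dataset” might look like, here is a minimal base-R sketch of fake visual-world data of the sort a ggplot2-based fixation-proportion plot would consume. This is not the post’s actual data-generation code; all column names (`Time`, `Object`, `Proportion`) and the shapes of the fixation curves are illustrative assumptions.

```r
# Minimal sketch (base R only): fake visual-world fixation proportions.
# Column names and curve shapes are assumptions, not the post's actual format.
set.seed(42)

time_bins <- seq(0, 2000, by = 50)  # ms from target-word onset
objects   <- c("Target", "Competitor", "Distractor")

# Hypothetical fixation probabilities: target looks rise over time,
# competitor looks peak early, distractor looks stay flat.
p_fix <- function(object, t) {
  switch(object,
    Target     = 1 / (1 + exp(-(t - 800) / 200)),
    Competitor = 0.3 * exp(-((t - 600) / 400)^2),
    Distractor = rep(0.1, length(t)))
}

# One long data frame in the format ggplot2 likes: one row per
# time bin per object, proportions clipped to [0, 1] after noise.
fake_data <- do.call(rbind, lapply(objects, function(obj) {
  data.frame(
    Time       = time_bins,
    Object     = obj,
    Proportion = pmin(pmax(p_fix(obj, time_bins) +
                           rnorm(length(time_bins), sd = 0.02), 0), 1))
}))

head(fake_data)
```

A data frame in this long format can then be handed to ggplot2 directly, e.g. mapping `Time` to x, `Proportion` to y, and `Object` to color.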
Recently, HLP lab was given the opportunity to evaluate the ASL MobileEye wearable eye tracking system and its accompanying analysis software, GazeMap. ASL generously offered to lend us the system for a month so that we could evaluate it in a set of short experiments. We attempted to put together simplified and shortened versions of four different experimental paradigms. In choosing these, we sought a variety that would tax different aspects of both the hardware and the analysis software.
After finishing the trial period, we compiled our feedback and sent it to ASL. They promptly responded with answers to our concerns and proposed solutions to many of the points we raised (see below). Since then, they have confirmed to us that their newest version includes a number of changes that directly address our comments.
This post contains a description of our testing methods, the results, and the feedback we sent back to ASL, modified only to reflect the changes that they claim to have made in their newest version. The primary goal of the tests we conducted was to determine what GazeMap and the MobileEye can do that might be of interest to psycholinguists. We’re particularly excited about Experiment 4 below.