eye-tracking

now, this is broader impact


These are federal funds well spent: after the CUNY Sentence Processing Conference, Daniel Pontillo reaches out to the broader public and explains, to a captive audience of night owls at a packed IHOP, how eye-tracking data allow us to test how we process the world (the poster is on implicit naming, or rather the lack thereof, in visual world experiments). The presentation was a resounding success. One member of an underrepresented minority was likely recruited for a research career in the cognitive sciences. A brawl that later ensued on the same premises stands in no relation to this presentation, in which only waffles were harmed. Science never stops. We are grateful for all the feedback we received from IHOPers during the poster presentation.

(Disclaimer: federal funds were only used to print the poster, which was first presented at the Sentence Processing Conference.)

Dan Pontillo gives an impromptu poster presentation at the IHOP around 2-something a.m., Columbia, S.C.

Creating spaghetti plots of eye-tracking data in R


I’ve been working on consolidating all the different R functions I’ve written over the years for plotting my eye-tracking data into one amazing super-function (based on the ggplot2 package) that can do it all. Here’s a first attempt that anybody with the right kind of dataset should be able to use to create plots like the ones below (generated from fake data; the R code that generates the data is included at the end of the post). If you find this code helpful, please consider acknowledging it via the following URL in your paper/presentation to spread the word:
https://hlplab.wordpress.com/2012/02/27/creating-spaghetti-plots-of-eye-tracking-data-in-r/

Left: Empirical means with error bars indicating standard error for four experimental conditions. Contrast presence is coded by color, adjective type by line type. The first vertical line indicates adjective onset; the later vertical lines indicate mean noun onset in each contrast condition. Right: Smoothed model estimates of proportions in each condition, with ribbons indicating 95% confidence intervals. Data from different subjects are plotted in separate panels.
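To give a sense of the general shape of such a plot before you grab the full code, here is a minimal ggplot2 sketch. This is not the super-function from the post; the data frame d and its column names (Time, PropFix, Contrast, AdjType, Subject) are hypothetical placeholders for your own by-bin fixation proportions.

# Minimal sketch, assuming d holds one row per subject x condition x time bin,
# with PropFix = proportion of fixations in that bin (columns are hypothetical).
library(ggplot2)

p <- ggplot(d, aes(x = Time, y = PropFix,
                   colour = Contrast, linetype = AdjType)) +
  stat_summary(fun = mean, geom = "line") +                          # empirical condition means
  stat_summary(fun.data = mean_se, geom = "errorbar", width = 10) +  # standard-error bars
  geom_vline(xintercept = 0) +                                       # adjective onset
  labs(x = "Time from adjective onset (ms)",
       y = "Proportion of fixations")
print(p)

# A by-subject version with smoothed curves and confidence ribbons, roughly in
# the spirit of the right-hand plot, could swap the summaries for a smoother:
p2 <- ggplot(d, aes(x = Time, y = PropFix,
                    colour = Contrast, linetype = AdjType)) +
  geom_smooth(method = "loess", se = TRUE) +   # loess fit with 95% confidence ribbon
  facet_wrap(~ Subject)                        # one panel per subject
print(p2)

The real function in the post does considerably more (model-based estimates, onset markers per condition, etc.); the sketch above only illustrates the basic stat_summary/facet_wrap pattern it builds on.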


Evaluation of ASL MobileEye and GazeMap


Recently, HLP lab was given the opportunity to evaluate the ASL MobileEye wearable eye tracking system and its accompanying analysis software, GazeMap. ASL generously offered to lend us the system for a month so that we could evaluate it in a set of short experiments. We attempted to put together simplified and shortened versions of four different experimental paradigms. In choosing these, we sought a variety that would tax different aspects of both the hardware and the analysis software.

After finishing the trial period, we compiled our feedback and sent it to ASL. They promptly responded with answers to our concerns and proposed solutions to many of the points we raised (see below). Since then, they have confirmed to us that their newest version includes a number of changes that directly address our comments.

This post contains a description of our testing methods, the results, and the feedback we sent to ASL, modified only to reflect the changes that they report having made in their newest version. The primary goal of our tests was to determine what GazeMap and the MobileEye can do that might be of interest to psycholinguists. We’re particularly excited about Experiment 4 below.