Digging into intervals and Times of Interest

Tobii Pro Lab

Times of Interest (or simply TOIs) are a Pro Lab concept that provides a great deal of analytical flexibility. When used properly, they allow researchers and analysts to organize recording data according to the intervals of time during which meaningful behaviors and events take place.

Examples of TOI use:

  • selecting data for visualizations such as heat maps and gaze plots
  • selecting and aggregating data to calculate metrics associated with coded task or subtask periods in your recording (recording, task, or trial analysis)
  • selecting and aggregating data to calculate metrics associated with coded subject behaviors or actions (behavioral coding)
  • selecting and aggregating data associated with media exposure and snapshot coverage (media- or snapshot-restricted task or trial analysis)

Times of Interest are created by specifying events and intervals, either of which can be system-generated or user-generated.    

Events

Events are multipurpose markers used to locate important occurrences on the eye tracking timeline. Event markers can be generated automatically by Pro Lab (RecordingStart, ImageStimulusStart, sync events), by the participant (Keypress), or by the researcher/analyst (Custom events). Every event has an associated timestamp: the exact time at which the event marker was applied. Events can be counted, but they have no duration, since they simply mark a meaningful point in time. In the terminology we will use, the creation of a named event creates a class of events, and any instance thereafter is referred to simply as an "event." Once you have a few event markers, though, you can do more than just count them, and that brings us to the next key concept: intervals.
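
As a mental model (our own sketch, not Pro Lab's internals), an event is nothing more than a named marker with a timestamp. The short Python example below, using hypothetical event instances, illustrates the distinction between an event class (the name) and its instances (individual timestamped markers), and shows that instances can be counted but carry no duration.

```python
from collections import Counter
from typing import NamedTuple

class Event(NamedTuple):
    """A timestamped marker on the recording timeline; it has no duration."""
    name: str         # the event class, e.g. "RecordingStart" or a custom name
    timestamp: float  # seconds from the start of the recording

# Hypothetical events logged during one recording
events = [
    Event("RecordingStart", 0.0),
    Event("Keypress", 5.2),    # a participant-generated class with several instances
    Event("Keypress", 12.8),
    Event("Keypress", 41.0),
    Event("RecordingEnd", 64.0),
]

# With events alone, all you can really do is count instances per class
print(Counter(e.name for e in events))
# Counter({'Keypress': 3, 'RecordingStart': 1, 'RecordingEnd': 1})
```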

Intervals

Intervals are spans of time on the recording timeline that have a start and an end point. Just as with events, intervals can be generated automatically by the system or by the experiment (using the corresponding system-generated events). For example, there will always be an interval that corresponds to the entire duration of the recording, from start to finish. And if you are running a screen-based study in which images are shown to the participant as stimuli, the system will automatically generate an interval in each recording's timeline that spans the period of time when an image appears on the screen.

A helpful aspect of how Pro Lab implements intervals is that data on all intervals is available in the metrics output. This means that statistics on the duration and start/end timestamps of all intervals can be obtained just by checking a box. There is no longer any need to carry out a full data export and sort through row upon row of data to obtain this basic and useful information.

Putting the concepts together

If this is your first time reading about these concepts, some degree of confusion is likely. To help pull the ideas together, we have placed them all on a representative study timeline, a schematic representation of what you would find in Pro Lab (see image below).

This diagram presents a study recording that lasts 64 seconds, so the timeline starts at 0 and ends at 64. The corresponding system-generated events at 0 and 64 are RecordingStart and RecordingEnd (blue and green event markers, respectively), and they automatically create a TOI entitled "Entire Recording". Since those events occur only once in the recording, this TOI is composed of a single interval that spans the whole recording.

Additionally, there are two user-created events, StartEvent and EndEvent (purple and orange event markers, respectively). This pair of events is applied three times on the timeline: at 5 and 7, 12 and 18, and 37 and 52 seconds.

We then define a custom TOI by setting StartEvent as the start point and EndEvent as the end point. Now, whenever StartEvent is followed by EndEvent, Pro Lab automatically creates an interval (here, Intervals 1, 2, and 3) and assigns it to the TOI. Keep in mind that while the study (timeline) clock runs continuously from start to end, the custom Time of Interest "clock" runs, and adds to the total duration of the TOI, only during the intervals defined in this way.

Figure: Intervals and Times of Interest on a representative study timeline.
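
To make the pairing rule concrete, here is a minimal sketch (plain Python, not Pro Lab itself) that walks the example timeline above: whenever a StartEvent is followed by an EndEvent, an interval is created and added to the custom TOI, and the TOI's total duration is simply the sum of its interval durations.

```python
def build_toi_intervals(events, start_name, end_name):
    """Pair each start event with the next end event to form (start, end) intervals."""
    intervals = []
    pending_start = None
    for name, timestamp in sorted(events, key=lambda e: e[1]):
        if name == start_name:
            pending_start = timestamp                      # open an interval
        elif name == end_name and pending_start is not None:
            intervals.append((pending_start, timestamp))   # close it
            pending_start = None
    return intervals

# The user-generated events from the diagram (seconds on the recording timeline)
events = [
    ("StartEvent", 5), ("EndEvent", 7),
    ("StartEvent", 12), ("EndEvent", 18),
    ("StartEvent", 37), ("EndEvent", 52),
]

intervals = build_toi_intervals(events, "StartEvent", "EndEvent")
print(intervals)                                     # [(5, 7), (12, 18), (37, 52)]
print(sum(end - start for start, end in intervals))  # 23 -> total TOI duration in seconds
```

Note how the recording clock spans the full 0 to 64 seconds, while the custom TOI "clock" accumulates only 23 seconds: the time inside its three intervals.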

Types of Times of Interest

According to how they are generated, we can divide TOIs into two general types: system-generated TOIs (based on recording, media, and snapshot events) and custom TOIs created manually by the user.

System-generated TOIs

Depending on the type of project you are running (Screen, Glasses, or Scene Camera), Pro Lab will automatically generate different TOIs. The table below shows which types of system-generated TOIs are created in recordings of each project type.

System-generated TOI | Intervals | Events | Screen | Glasses | Scene Camera
Recording | Whole recording | RecordingStart, RecordingStop | yes | yes | yes
Media | Images and videos | ImageStimulusStart, ImageStimulusStop; VideoStimulusStart, VideoStimulusStop | yes | no | no
Snapshot* | Intervals mapped to a snapshot | Snapshot[name] Interval Start, Snapshot[name] Interval End | no | yes | no

(The Screen, Glasses, and Scene Camera columns indicate the project types in which each system-generated TOI is created.)

*Snapshot TOIs are created automatically when eye tracking data is mapped onto a snapshot. When you select a snapshot for analysis, your TOI will be composed of all the intervals defined by the two events "Snapshot[name] Interval Start" and "Snapshot[name] Interval End". The "Snapshot[name] Interval Start" event is created by the first gaze point/fixation mapped onto that snapshot, and the "Snapshot[name] Interval End" event is created by the last gaze point/fixation mapped onto it. If there is a gap of 5 s between two mapped gaze points, a "Snapshot[name] Interval End" event is created at the last gaze point/fixation before the gap, and a "Snapshot[name] Interval Start" event is created at the first gaze point after the gap.
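
The gap rule above amounts to a simple segmentation of the mapped gaze timestamps. The following sketch is our own illustration (not Tobii's implementation): it splits a sorted list of timestamps of gaze points mapped onto one snapshot into intervals whenever consecutive points are at least 5 s apart.

```python
MAX_GAP_S = 5.0  # gap between mapped gaze points that closes a snapshot interval

def snapshot_intervals(mapped_timestamps, max_gap=MAX_GAP_S):
    """Split sorted timestamps of gaze points mapped onto a snapshot into intervals.

    Each interval starts at the first mapped gaze point after a gap and ends at
    the last mapped gaze point before the next gap (or the end of the data).
    """
    intervals = []
    if not mapped_timestamps:
        return intervals
    start = previous = mapped_timestamps[0]
    for t in mapped_timestamps[1:]:
        if t - previous >= max_gap:       # gap found: close the current interval...
            intervals.append((start, previous))
            start = t                     # ...and open a new one at the next point
        previous = t
    intervals.append((start, previous))
    return intervals

# Hypothetical mapping times (seconds) for one glasses recording
print(snapshot_intervals([10.1, 10.4, 11.0, 18.2, 18.5, 19.0]))
# [(10.1, 11.0), (18.2, 19.0)]  -> two Snapshot[name] intervals
```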

Custom-generated TOIs, or simply custom TOIs

Custom Times of Interest (cTOIs) are a new data analysis concept introduced in Pro Lab. cTOIs are a flexible tool that allows the eye tracking researcher or data analyst to precisely define and select the portions of their experiment to be analyzed. As described in the section on intervals above, cTOIs are created by specifying a start point and an end point, either of which can be a system-generated or user-generated event. The steps are simply:

  1. Create at least one user-generated event
  2. Create the cTOI in Pro Lab, selecting one event as the start point and another as the end point
  3. Select a backing visual from the Media Library on which to show gaze data visualizations

The Media Library is the place where frames captured from the recording can be stored and called up for use as the backing visual for frame-based TOIs. 

Watch a tutorial

Watch the video above to see how to set up TOIs; after that, we'll focus on an example of their potential application.

"The hippopotamus is larger than the train" - study example

As mentioned earlier, TOIs are used to isolate periods of time where things happen, things that are meaningful or important to the researcher. These periods of time can be associated with specific events (e.g., during the first visit to the navigation bar at the top of the page) or behaviors (e.g., gaze during the period after the first back-and-forth scan between two targets). Alternatively, they can be used to organize gaze analysis into epochs or intervals, or for time series analysis.

The example we will use to illustrate the use of TOIs is based on a well-known psycholinguistics research paradigm called the "visual world" paradigm. It is broadly used in developmental psycholinguistics to study language acquisition, and in adults to study language processing. In our study, a subject is presented with a video composed of an image with four objects and an auditory sentence related to those objects. The test subjects were asked to look at the image and listen to the sentence, and what we want to observe is whether the subject's eye movement behavior can tell us something about how the sentence is processed.

The sentence and images were produced to create some ambiguity related to the size of the objects. The relative size of an object in the image can be congruent or incongruent with our real-world concept of its size (e.g., a hippopotamus in real life is smaller than a train, but in our image the train is smaller: "incongruent"; the mouse, on the other hand, is smaller than the tree both in real life and in the image: "congruent"). The prediction is that if the participant's visual behavior is guided by expectations drawn from the real world, then once they hear the word "larger" or "smaller" they should first look at the objects that are usually larger or smaller according to their real-world knowledge. If, however, their behavior is based on the visual information they are exposed to, then they should first look at the larger or smaller object in the image, irrespective of their real-world experience. So, in the end, what we want to know is: when the words "larger" and "smaller" are heard, where does the participant look first?

To investigate this, we start by drawing AOIs around each of the four objects. Once the AOIs are set up, gaze metrics are by default calculated for the entire duration of the exposure, i.e. for the entire video. However, to get to the core of our question we want to look at the subject's behavior not from the start of the video but from the onset of the words "larger" and "smaller". In the example video, the word "larger" is produced 4.153 seconds into the video (the sentence is "The hippopotamus is larger than the train"). We can use this information to manually log an event called "onset larger" and, together with the automatically created "VideoStimulusEnd" event, create our TOI, and thus analyze the data from the period of time relevant to our research question and predictions.
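
As a hedged illustration of the kind of analysis this TOI enables, the sketch below filters fixations to the period from the "onset larger" event to the end of the video and reports which AOI was hit first. The fixation format, the 8.0 s stand-in for VideoStimulusEnd, and the AOI names are assumptions made for this example; they are not a Pro Lab export specification.

```python
# Each fixation: (start_time_s, end_time_s, aoi_name or None) -- a simplified,
# hypothetical representation of mapped fixation data.
fixations = [
    (0.8, 1.4, "train"),
    (2.0, 3.1, "hippopotamus"),
    (4.4, 5.0, "train"),         # first fixation after the word onset
    (5.3, 6.2, "hippopotamus"),
]

TOI_START = 4.153   # manually logged "onset larger" event (seconds into the video)
TOI_END = 8.0       # stand-in for the VideoStimulusEnd event in this sketch

def first_aoi_after_onset(fixations, toi_start, toi_end):
    """Return the first AOI fixated within the TOI, or None if there is none."""
    for start, end, aoi in sorted(fixations):
        if aoi is not None and toi_start <= start < toi_end:
            return aoi
    return None

print(first_aoi_after_onset(fixations, TOI_START, TOI_END))  # "train"
```

The first AOI hit within the TOI can then be compared against the congruent and incongruent predictions described above.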

Creating a Time of Interest

Times of Interest are a highly flexible data analysis tool in Pro Lab. Together with Areas of Interest, they provide useful and fine-grained capabilities for defining not only the spatial extent of your analyses (AOIs) but also their temporal span (TOIs). Applied appropriately and with care, these tools allow researchers to carry out powerful, sophisticated analyses of even the most demanding stimulus presentations.
