Time of Interest (TOI) is an analysis tool that provides a high degree of analytical flexibility. Used properly, it allows researchers and analysts to carve out the periods of a test recording during which meaningful behaviors and events take place. These intervals can be the duration of a task, the time spent performing a certain behavior (along with the number of occurrences), or the periods containing data mapped onto snapshots.
TOIs can be used in several ways, described below.
There are two types of TOI data sources: automatic TOIs, generated when data is mapped to a snapshot, and custom TOIs, created when manually logging events. When you create a custom TOI, you define interval selection rules and a gaze data source. All available TOIs (both auto-generated and custom) are listed on the Visualizations tab. When you have created a TOI and select it in the Visualization tools, or export metrics for it (see above), Pro Lab searches all recordings in the project and finds every interval defined by the chosen events.
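Conceptually, resolving a TOI means pairing its start and end events, in chronological order, into intervals. Pro Lab does this internally; the following is a minimal Python sketch of the idea, with hypothetical event names and a simple `(time, name)` event log standing in for a recording.

```python
def toi_intervals(events, start_name, end_name):
    """Pair chronological (time, name) events into (start, end) intervals.

    A start event opens an interval; the next end event closes it.
    Unmatched events are ignored.
    """
    intervals = []
    start = None
    for time, name in sorted(events):  # chronological order
        if name == start_name and start is None:
            start = time
        elif name == end_name and start is not None:
            intervals.append((start, time))
            start = None
    return intervals

# Illustrative event log for one recording: two task repetitions.
events = [(2.0, "TaskStart"), (9.5, "TaskEnd"),
          (14.0, "TaskStart"), (21.2, "TaskEnd")]
print(toi_intervals(events, "TaskStart", "TaskEnd"))
# [(2.0, 9.5), (14.0, 21.2)]
```

In Pro Lab the same pairing is applied across every recording in the project, so a single TOI definition can yield many intervals per participant.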
Snapshot TOIs are created automatically when eye tracking data is mapped onto a snapshot or stimulus. When you select a snapshot for analysis, the TOI is composed of all the intervals defined by the two events "Snapshot X interval start" and "Snapshot X interval end". The "Snapshot X interval start" event is created at the first gaze point/fixation mapped onto the snapshot, and the "Snapshot X interval end" event at the last gaze point/fixation mapped onto it. If there is a gap of 5 s between two mapped gaze points, a "Snapshot X interval end" event is created at the last gaze point/fixation before the gap, and a "Snapshot X interval start" event at the gaze point after the gap.
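The gap rule above can be sketched in a few lines of Python. This is not Pro Lab code, just an illustration of how the timestamps of gaze points mapped onto one snapshot split into start/end intervals around gaps of more than 5 seconds (whether the threshold is inclusive is an assumption here).

```python
GAP_LIMIT = 5.0  # seconds; the gap threshold described above

def snapshot_intervals(mapped_times, gap=GAP_LIMIT):
    """Derive (interval start, interval end) pairs from the timestamps
    of gaze points mapped onto a single snapshot."""
    times = sorted(mapped_times)
    if not times:
        return []
    intervals = []
    start = prev = times[0]
    for t in times[1:]:
        if t - prev > gap:           # gap found: close the current interval
            intervals.append((start, prev))
            start = t                # ...and open a new one after the gap
        prev = t
    intervals.append((start, prev))  # close the final interval
    return intervals

# Gaze mapped at 1-3 s, then again at 10-12 s -> two intervals.
print(snapshot_intervals([1.0, 2.0, 3.0, 10.0, 11.0, 12.0]))
# [(1.0, 3.0), (10.0, 12.0)]
```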
Watch the video below to see how to set up TOIs; after that, we'll focus on an example of their potential application.
As mentioned earlier, TOIs are used to isolate periods of time where things happen that are meaningful or important to the researcher. These periods can be associated with specific events (e.g., during the first visit to the navigation bar at the top of the page) or behaviors (e.g., gaze during the period after the first back-and-forth scan between two targets). Alternatively, they can be used to organize gaze analysis into epochs or intervals for time-series analysis.
The example we will use to illustrate TOIs is based on a well-known psycholinguistics research paradigm called the "visual world". This paradigm is broadly used in developmental psycholinguistics to study language acquisition, and in adults to study language processing. In our study, a video composed of an image with four objects and an auditory sentence related to the objects is presented to a subject. The test subjects were asked to look at the image and listen to the sentence, and what we want to observe is whether the subject's eye movement behavior can tell us something about how the sentence is processed.

The sentence and images were produced to create some ambiguity related to the size of the objects. The relative size of an object in the image can be congruent or incongruent with our real-world concept of its size (e.g., a hippopotamus in real life is smaller than a train, but in our image the train is smaller: incongruent; the mouse, on the other hand, is smaller than the tree both in real life and in the image: congruent). The prediction is this: if the participant's visual behavior is guided by expectations drawn from the real world, then once they hear the word "larger" or "smaller" they should look first at the objects that are usually larger or smaller according to their real-world knowledge. If, however, their behavior is based on the visual information they are exposed to, they should look first at the larger or smaller object in the image, irrespective of real-world experience. So in the end, what we want to know is: when the words "larger" and "smaller" are heard, where does the participant look first?
To investigate this, we start by drawing AOIs around each of the four objects. Once the AOIs are set up, gaze metrics are by default calculated for the entire duration of the exposure, i.e. for the entire video. In our study, however, to get to the core of our question we want to look at the subject's behavior not from the start of the video but from the onset of the words "larger" and "smaller". In the example video, the word "larger" is produced 4.153 seconds into the video (the sentence is "The hippopotamus is larger than the train"). We can use this information to manually log an event called "onset larger" and, together with the automatically created "VideoStimulusEnd" event, create our TOI, thus analyzing the data from the period of time relevant to our research question and predictions.
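To make the payoff of this TOI concrete, here is a hypothetical Python sketch of the analysis it enables: restrict fixation data to the window from the "onset larger" event to the end of the video, then report which AOI is fixated first. The fixation format, the AOI names, and the video end time are illustrative assumptions, not Pro Lab output.

```python
ONSET_LARGER = 4.153   # custom event: onset of the word "larger" (from the text above)
VIDEO_END = 8.0        # stands in for "VideoStimulusEnd" (illustrative value)

def first_aoi_in_toi(fixations, toi_start, toi_end):
    """Return the first AOI fixated within [toi_start, toi_end].

    fixations: iterable of (start_time, aoi) pairs; aoi is None when the
    fixation lands outside every AOI.
    """
    for t, aoi in sorted(fixations):
        if toi_start <= t <= toi_end and aoi is not None:
            return aoi
    return None

# Hypothetical fixation sequence for one trial:
fixations = [(3.8, "mouse"),           # before word onset: excluded by the TOI
             (4.4, "hippopotamus"),    # first fixation after "larger"
             (5.1, "train")]
print(first_aoi_in_toi(fixations, ONSET_LARGER, VIDEO_END))
# hippopotamus
```

Without the TOI, the fixation on "mouse" at 3.8 s would count toward the metrics; with it, only behavior after the word onset is analyzed, which is exactly what the research question requires.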
Times of Interest is a highly flexible analysis tool in Pro Lab. Together with Areas of Interest, it provides useful, fine-grained capabilities for defining not only the spatial extent of your analyses (AOIs) but also their temporal span (TOIs). Applied appropriately and with care, these tools enable researchers to carry out powerful, sophisticated analyses of even the most demanding stimulus presentations.