Manual and assisted mapping of gaze data onto snapshots and screenshots


Wearable eye tracking devices such as Tobii Pro Glasses 2 produce eye gaze data mapped to a coordinate system relative to the wearable eye tracker and the recorded video, not to static objects of interest in the environment around the participant wearing the eye tracker. For most statistical/numerical analyses to be meaningful, the collected eye tracking data needs to be mapped onto objects of interest and into a new coordinate system with its origin fixed in the environment around the participant.

The same scenario arises if you perform a study using a remote eye tracker and a scene camera. Again, the data collected by the eye tracker is mapped to a coordinate system relative to the scene camera video, not to the static objects or people of interest in front of the participant.

Tobii Pro Lab addresses this challenge by allowing the user to map gaze data onto still images (snapshots and screenshots) of the environments and target objects. Data from a recording can be mapped onto one or several images. These images are used for generating visualizations, such as heatmaps and gaze plots, and Areas of Interest. The mapping can be done either entirely manually or with the assisted mapping function.
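
To make the relationship between the two coordinate systems concrete, the minimal sketch below shows what a single-point mapping amounts to. It is an illustration only, not Pro Lab's internal implementation: it assumes the snapshot scene is (near) planar, so a 3x3 homography H can relate video-frame pixels to snapshot pixels, and the function name and example matrix are made up for the sketch.

    import numpy as np

    def map_gaze_to_snapshot(gaze_xy, H):
        """Re-express a gaze point from video-frame pixels into snapshot
        pixels by applying a 3x3 homography H (planar-scene assumption)."""
        x, y = gaze_xy
        p = H @ np.array([x, y, 1.0])                    # homogeneous coordinates
        return (float(p[0] / p[2]), float(p[1] / p[2]))  # back to pixels

    # Toy homography: a pure translation by (10, 5) pixels.
    H = np.array([[1.0, 0.0, 10.0],
                  [0.0, 1.0,  5.0],
                  [0.0, 0.0,  1.0]])
    print(map_gaze_to_snapshot((320.0, 240.0), H))  # (330.0, 245.0)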

How to map data onto a Snapshot or Screenshot manually:

  1. If not already enabled, enable mapping by clicking the Snapshots switch just below the video display area.
  2. In the Gaze Data section of the Tools panel, select the Snapshots tab.
  3. Enable or disable the Automatically step to next fixation toggle switch. Enabling this switch will cause the paused replay to automatically jump to the next fixation/raw data point on the Timeline when a gaze point has been manually mapped. This eliminates the need to use arrow keys to step forward manually on the timeline.
  4. In the grid/list of Snapshot images, select the Snapshot onto which you want to map data. You can also select which Snapshot to map data onto from the list of Snapshots located below the replay Timeline. On the Timeline, each Snapshot is represented by a thumbnail as well as a row that shows for which parts of the recording data have been mapped. At any time during the mapping of data, you can switch back and forth between different Snapshots without losing mapped data.
  5. While skimming through the recording replay, locate and pause the video at the start of the section that you want to map onto the selected Snapshot.
  6. To map data onto the Snapshot, first, locate the gaze data point (the circle superimposed on the video) in the recorded video. Click once in the corresponding location on the Snapshot image as precisely as possible.
  7. Continue this process until all data has been mapped onto the active Snapshot. As data points are mapped onto the Snapshot, the Snapshot timeline will indicate at which times data points have been mapped.
  8. Replay or manually step through the recording using the arrow keys once the mapping is completed and compare the mapping on the Snapshot with the gaze locations in the video to verify that data has been mapped correctly.

    To move a mapped point, right-click it and select Delete current manually mapped fixation point in the menu. Then click the Snapshot to map the gaze point in a new location.
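
Conceptually, the product of manual mapping is just a table from Timeline timestamps to clicked snapshot coordinates, one table per Snapshot. The sketch below illustrates that structure in Python; the class, the file name, and the replace-on-remap behavior are assumptions for illustration, not Pro Lab's internal or export format.

    from dataclasses import dataclass

    @dataclass
    class MappedPoint:
        timestamp_ms: int  # position on the recording Timeline
        x: float           # clicked x coordinate on the Snapshot, in pixels
        y: float           # clicked y coordinate on the Snapshot, in pixels

    # One list of mapped points per Snapshot image.
    mapped: dict[str, list[MappedPoint]] = {"shelf_snapshot.png": []}

    def map_point(snapshot: str, timestamp_ms: int, x: float, y: float) -> None:
        """Record one manual click; mapping the same moment again replaces it."""
        points = mapped[snapshot]
        points[:] = [p for p in points if p.timestamp_ms != timestamp_ms]
        points.append(MappedPoint(timestamp_ms, x, y))

    map_point("shelf_snapshot.png", 12_340, 412.0, 250.5)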

How to map data onto a Snapshot or Screenshot using the assisted mapping algorithm:

  1. If not already enabled, enable mapping by clicking the Snapshots switch just below the video display area.
  2. In the Gaze Data section of the Tools panel, select the Snapshots tab.
  3. In the grid/list of Snapshot images, select the Snapshot onto which you want to map data. You can also select which Snapshot to map data onto from the list of Snapshots located below the replay Timeline.
  4. Select the interval on the Timeline within which you want the gaze points to be mapped automatically. To select an interval, drag the yellow handles on either side of the red track slider to where you want the interval to start and end. If needed, you can zoom in on the Timeline to make the selection easier. The interval is most often the part of the recording where the location or object shown on the Snapshot comes into view.
  5. Right-click on the selected interval or click the ellipsis (...) located directly above the Timeline, and select Run assisted mapping. The interval is now placed in the processing queue, and the algorithm processes the mappings automatically in queue order. If another mapping is already in progress, it will be completed before the next one is initiated. You can check the jobs placed in the queue by clicking the number at the top right of the window.
  6. You can choose to create another mapping task by repeating steps 4 and 5 and place it in the processing queue, or, if you don't have any more pending tasks, continue to the next step.

    When the assisted mapping is completed, a diagram is added, on the row representing the Snapshot under the Timeline, to the section of the recording for which the mapping has been done. The diagram indicates how confident the algorithm is about the similarity between the gaze point in the recording and the mapped position on the Snapshot. A high value indicates high similarity, and a low value indicates low similarity. A low similarity level does not necessarily mean that the data is incorrectly mapped, only that the algorithm had less information on which to base the mapping and therefore labels the result as less similar. (For a rough idea of how such matching and scoring can work, see the sketch after this list.)
  7. Review sections with low similarity. Sections above the threshold will be marked in green and sections below the threshold will be marked in orange for easier identification. 
  8. If necessary, adjust the similarity threshold in the tool panel on the right to a level that fits the requirements of your project, and/or re-map points manually if incorrect mappings are found.
  9. Replay or manually step through the recording using the arrow keys once the mapping is completed and compare the mapping on the Snapshot with the gaze locations in the video to verify that data has been mapped correctly.
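
Tobii does not document the assisted mapping algorithm, but feature matching followed by a RANSAC-fitted homography is a standard way to relate a video frame to a reference image, and the RANSAC inlier ratio gives a crude analogue of a per-frame similarity score. The Python/OpenCV sketch below is therefore an illustration of the general technique, not Pro Lab's implementation; all names in it are made up.

    import cv2
    import numpy as np

    def map_frame_gaze(frame, snapshot, gaze_xy):
        """Map one gaze point from a (grayscale) video frame onto a snapshot.
        Returns (mapped_xy, score), or (None, 0.0) when matching fails."""
        orb = cv2.ORB_create(2000)
        kf, df = orb.detectAndCompute(frame, None)
        ks, ds = orb.detectAndCompute(snapshot, None)
        if df is None or ds is None:
            return None, 0.0
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(df, ds)
        if len(matches) < 4:                       # a homography needs >= 4 pairs
            return None, 0.0
        src = np.float32([kf[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([ks[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            return None, 0.0
        pt = cv2.perspectiveTransform(np.float32([[gaze_xy]]), H)[0][0]
        score = float(mask.sum()) / len(matches)   # inlier ratio as a rough score
        return (float(pt[0]), float(pt[1])), score

    # Usage sketch; the file names and gaze coordinate are placeholders.
    # frame = cv2.imread("frame_01234.png", cv2.IMREAD_GRAYSCALE)
    # snapshot = cv2.imread("shelf_snapshot.png", cv2.IMREAD_GRAYSCALE)
    # print(map_frame_gaze(frame, snapshot, (640.0, 480.0)))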


How to re-map gaze points manually, for each point or fixation:

  1. Play back your recording until the gaze or fixation point is visible.
  2. Do one of the following: delete the point by pressing the Delete key or by right-clicking the point on the Snapshot and selecting Delete current automatically mapped fixation point in the dialog; change its location by clicking the Snapshot where the point should be, just as in the manual mapping procedure above; or leave it as is by clicking the Accept button or typing C on your keyboard.
  3. Replay or manually step through the recording using the arrow keys once the mapping is completed and compare the mapping on the Snapshot with the gaze locations in the video to verify that data has been mapped correctly.


Mapped gaze/fixation point color-coding:

  • In the row representing the Snapshot under the Timeline, manually remapped gaze points and fixations
    appear as solid green.
  • In the Snapshot image, automatically generated mappings appear as a green circle, whereas manually
    mapped points appear as a red circle.
  • In the Snapshot image, deleted points mapped by the assisted mapping algorithm appear as a gray circle.


Snapshot considerations when using the assisted mapping algorithm

For the assisted mapping algorithm to be able to interpret the snapshot images correctly, there are a few things you should consider when you select the picture you want to use as a reference (the snapshot).

The algorithm compares the snapshot with the video frames in the recording from Pro Glasses 2. For this procedure to work as well as possible, it is important that the scene in the snapshot is as 'flat' as possible. By 'flat' we mean that the scene should be as two-dimensional as possible, in the sense that all objects in the image are at more or less the same distance from your viewpoint and always visible, regardless of your viewing angle.

Imagine a grocery store shelf with rows of cans and cereal boxes: all items on the shelf remain visible even if you move a few meters to the left or right of your original position. They appear a bit skewed, but that is no problem, as the algorithm can still interpret the image. There is no risk of one item 'shadowing' another, which makes for a good reference snapshot, since we never know where the participant will stand in front of the shelf.

In contrast, consider a much more three-dimensional scene: a store desk with a cash register on it, and, to the left of the cash register and a little further back on a shelf, a can of pens. If we stand right in front of the desk, we see both the cash register and the can; the scene looks two-dimensional from this point of view, and a photograph (snapshot) taken from here would show both items fully. As long as the frames in the participant's recording are captured from more or less the same position, there is no problem. But if the participant stands a few meters to the right, the can of pens is shadowed by the cash register: the can is no longer visible at that point in the recording, and the algorithm will not be able to map the data correctly.
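
The flatness requirement follows from simple projective geometry: when the camera moves, objects at different depths shift by different amounts in the image (parallax), so no single image-to-image mapping stays correct for all of them. The toy calculation below, an illustration with made-up numbers for the cash register scene above, shows the effect under a one-dimensional pinhole model.

    f = 800.0  # assumed focal length, in pixels

    def project(x, z, cam_x):
        """1-D pinhole projection of a point at lateral position x, depth z,
        seen from a camera at lateral position cam_x."""
        return f * (x - cam_x) / z

    register, pen_can = (0.0, 1.0), (-0.2, 1.5)  # (x, depth) in meters
    for cam_x in (0.0, 0.5):                     # in front, then 0.5 m to the right
        u_reg = project(*register, cam_x)
        u_can = project(*pen_can, cam_x)
        print(f"cam_x={cam_x}: register {u_reg:.0f}px, can {u_can:.0f}px, "
              f"separation {u_can - u_reg:.0f}px")

    # The separation between the two items changes with viewpoint (it even flips
    # sign), so one planar mapping cannot fit both depths; if everything sat at
    # the same depth, the separation would stay constant.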

Related Articles

  • Assisted mapping of gaze data recorded with the Mobile Testing Accessory

    Mobile testing devices, such as the Tobii Pro Mobile Testing Accessory, assist you in producing eye gaze data using your mobile device as the stimulus. Together with a screen-based eye tracker and a scene camera, gaze data can be acquired. The gaze data can be mapped onto a still image (snapshot) of a webpage or app. Mapping can be done in two different ways: manual mapping or assisted mapping. Data from a recording can be mapped onto one or several images. These images are used for generating visualizations, such as heatmaps and gaze plots, and Areas of Interest.

  • Digging into intervals and Times of Interest

    Learn how to use Times of Interest to segment your data into meaningful periods of data analysis.

  • Performing a calibration and validation in Pro Lab

    Learn how to perform a calibration and calibration validation with Tobii Pro Lab.
