Assisted mapping of gaze data recorded with the Mobile Testing Accessory

Mobile Testing Accessory eye gaze pattern analysis

Mobile testing devices, such as the Tobii Pro Mobile Testing Accessory, assist you in producing eye gaze data using your mobile device as the stimulus. Together with a screen-based eye tracker and a scene camera, gaze data can be acquired. The gaze data can then be mapped onto a still image (a snapshot) of a webpage or app, either manually or with assisted mapping. Data from a recording can be mapped onto one or several images. These images are used for generating visualizations, such as heatmaps and gaze plots, and for defining Areas of Interest.
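As an illustration of what mapped data makes possible (not of how Pro Lab itself renders its visualizations), the sketch below turns a handful of gaze points that have already been mapped onto snapshot pixel coordinates into a simple heatmap overlay; the file names and point coordinates are hypothetical.

```python
# Illustrative only: builds a basic heatmap from gaze points that have already
# been mapped onto snapshot pixel coordinates. This is not the algorithm
# Tobii Pro Lab uses; file names and points are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import image as mpimg
from scipy.ndimage import gaussian_filter

snapshot = mpimg.imread("webpage_snapshot.png")   # hypothetical snapshot image
height, width = snapshot.shape[:2]

# Hypothetical mapped gaze points, as (x, y) snapshot pixel coordinates.
mapped_points = [(120, 340), (130, 355), (410, 900), (415, 905), (418, 910)]

density = np.zeros((height, width))
for x, y in mapped_points:
    if 0 <= x < width and 0 <= y < height:
        density[y, x] += 1                        # accumulate hits per pixel

density = gaussian_filter(density, sigma=30)      # smooth hits into a heatmap

plt.imshow(snapshot)
plt.imshow(density, cmap="jet", alpha=0.5)        # overlay with transparency
plt.axis("off")
plt.savefig("heatmap_overlay.png", bbox_inches="tight")
```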

To use assisted mapping with mobile recordings, it is important to take correct screenshots. Errors in the screenshot relative to the recording may reduce the precision of assisted mapping.

How to gather correct full-page screenshots of webpages:

1. Open the desired webpage in the Google Chrome browser.

2. Press F12 to open the developer tools (See Image 1).

3. Click the Toggle device toolbar icon (highlighted in Image 1).


Image 1: Chrome Developer Tools

4. Select the desired device from the dropdown menu (named Responsive in Image 2). If your device is not listed, select Edit… to open a more comprehensive list of devices.

5. Use Chrome's built-in functionality (Image 2) or a Chrome extension to capture a full-length screenshot. Two extensions have been tested: Full Page Screen Capture and Fireshot. Both perform well, but they suit different use cases, so it can be useful to have both installed. Full Page Screen Capture is quicker and captures the entire page with one click. Fireshot can capture only the visible part of the screen, which is useful if you want to capture pop-ups without capturing the entire page. A scripted alternative using a headless browser is sketched after Image 2.



Image 2: Chrome device list and screenshot options
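If you prefer to capture the screenshot from a script rather than through DevTools, a headless browser can produce the same kind of full-length capture. The sketch below uses Playwright for Python with one of its built-in device descriptors; the URL, device name, and output file are placeholders, and the manual Chrome procedure above remains the reference method.

```python
# Illustrative alternative to the manual DevTools capture: a full-page
# screenshot taken with a headless browser emulating a mobile device.
# The URL, device name, and output path are placeholders.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    device = p.devices["iPhone 12"]           # built-in mobile device descriptor
    browser = p.chromium.launch(headless=True)
    context = browser.new_context(**device)   # applies viewport, scale factor, user agent
    page = context.new_page()
    page.goto("https://example.com/landing-page", wait_until="networkidle")
    page.screenshot(path="snapshot_full_page.png", full_page=True)
    browser.close()
```

Whichever method you use, compare the resulting image with what the participant actually saw on the device before importing it as a snapshot.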

How to map data onto a Snapshot manually:

1. In the Gaze Data section of the Tools panel, select the Snapshots tab.

2. If not already enabled, enable Mapping by toggling on Show snapshot in the Gaze data section.

3. Enable or disable the Automatically step to next fixation toggle switch.

Enabling this switch will cause the paused replay to automatically jump to the next fixation/raw data point on the Timeline when a gaze point has been manually mapped. This eliminates the need to use arrow keys to step forward manually on the timeline.

4. Import snapshots by pressing the plus sign “+” under Snapshot images. Select the image you would like to import and press Open.

5. In the grid/list under Snapshot images, select the Snapshot onto which you want to map data.

You can also select which snapshot to map data onto from the list of snapshots located below the replay Timeline. On the Timeline, each snapshot is represented by a thumbnail as well as a row that shows for which parts of the recording data have been mapped. At any time during mapping, you can switch back and forth between different Snapshots without losing mapped data.

6. Locate and pause the video at the start of the section that you want to map onto the selected Snapshot.

7. To map data onto the Snapshot, first locate the gaze data point (the circle superimposed on the video) in the recorded video. Then click once, as precisely as possible, in the corresponding location on the snapshot image.

8. Continue this process until all data has been mapped onto the active Snapshot. As data points are mapped onto the Snapshot, the Snapshot timeline will indicate at which times data points have been mapped.

9. Replay or manually step through the recording using the arrow keys once the mapping is completed and compare the mapping on the Snapshot with the gaze locations in the video to verify that data has been mapped correctly.

10. To move a mapped point, right-click it and select Delete current manually mapped fixation point in the menu. Then click on the Snapshot to map the gaze point in a new location.

How to map data onto a Snapshot or Screenshot using the assisted mapping algorithm:

1. In the Gaze Data section of the Tools panel, select the Snapshots tab.

2. If not already enabled, enable Mapping by toggling on Show snapshot in the Gaze data section.

3. Import snapshots by pressing the plus “+” button under Snapshot images. Select the image you would like to import and press Open.

4. In the grid/list of Snapshot images, select the snapshot onto which you want to map data.

5. Select the interval on the Timeline in which you want the gaze points to be mapped automatically by dragging the yellow handles on either side of the red track slider to where you want the start and end of the interval to be.

If needed, you can zoom in on the timeline to make the interval selection easier. This is most often the part of the recording where the location or object shown on the Snapshot comes into view.

6. Right-click on the selected interval or click the ellipsis (...) located directly over the timeline, and select Run assisted mapping.

The interval is now placed in the processing queue. The algorithm starts processing the mapping automatically according to the order in the processing queue. If another mapping is already in progress, that mapping will be completed before the next one is initiated. You can check the jobs placed in the queue by clicking the number at the top right of the window.

7. You can create another mapping task by repeating steps 4 to 6 and placing it in the processing queue, or, if you don’t have any more pending tasks, continue to the next step.

When the assisted mapping is completed, a diagram is added on the row representing the Snapshot under the Timeline, covering the section of the recording for which the mapping has been done. The diagram indicates how confident the algorithm is about the similarity between the gaze point in the recording and the mapped position in the Snapshot. A high value indicates high similarity, and a low value indicates low similarity. A low similarity level does not necessarily mean that the data is incorrectly mapped, just that the algorithm had less information on which to base the mapping and therefore labels it as less similar.

Review sections with low similarity. Sections above the threshold will be marked in green and sections below the threshold will be marked in orange for easier identification.

If necessary, adjust the similarity threshold in the tool panel on the right to a level that fits the requirements of your project, and/or re-map points manually if incorrect mappings are found.

Replay or manually step through the recording using the arrow keys once the mapping is completed and compare the mapping on the Snapshot with the gaze locations in the video to verify that data has been mapped correctly.
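To make the threshold idea concrete, here is a minimal sketch that splits mapped points into those that can be accepted and those worth reviewing manually, mirroring the green/orange marking described above. The data structure and field names are hypothetical and do not reflect the actual Pro Lab export format.

```python
# Minimal sketch of the review logic described above: split assisted-mapping
# results by a similarity threshold. The data structure is hypothetical and
# does not reflect the actual Tobii Pro Lab export format.
from dataclasses import dataclass

@dataclass
class MappedPoint:
    timestamp_ms: int    # position in the recording
    snapshot_x: int      # mapped x coordinate on the snapshot
    snapshot_y: int      # mapped y coordinate on the snapshot
    similarity: float    # similarity value reported for the mapping (0.0 to 1.0)

def split_by_threshold(points, threshold=0.7):
    """Return (accepted, needs_review) according to the similarity threshold."""
    accepted = [pt for pt in points if pt.similarity >= threshold]
    needs_review = [pt for pt in points if pt.similarity < threshold]
    return accepted, needs_review

points = [
    MappedPoint(1200, 150, 420, 0.92),   # high similarity: likely fine as mapped
    MappedPoint(1240, 152, 424, 0.55),   # low similarity: review and re-map if needed
]
accepted, needs_review = split_by_threshold(points)
print(f"{len(accepted)} accepted, {len(needs_review)} to review")
```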

How to re-map gaze points manually, for each point or fixation:

1. Play back your recording until the gaze or fixation point is visible.

2. Delete the point by pressing the Delete button on the keyboard or by right-clicking the point on the Snapshot and selecting Delete current automatically mapped fixation point in the dialog.

3. Change the location by manually clicking the snapshot where the point should be, as described in the manual mapping section above, or leave it as is by clicking the Accept button or typing C on your keyboard.

4. Replay or manually step through the recording using the arrow keys once the mapping is completed and compare the mapping on the snapshot with the gaze locations in the video to verify that data has been mapped correctly.

Mapped gaze/fixation point color-coding:

In the row representing the snapshot under the timeline, manually remapped gaze points and fixations appear as solid green.

In the Snapshot image, automatically generated mappings appear as a green circle, whereas manually mapped points appear as a red circle.

In the Snapshot image, deleted points mapped by the assisted mapping algorithm appear as a gray circle.

Snapshot considerations when using the assisted mapping algorithm:

For the assisted mapping algorithm to be able to interpret the snapshot images correctly, there are a few things you should consider when you select the picture you want to use as a reference (the snapshot).

The algorithm compares the snapshot with the picture frames in the recording. For this procedure to work correctly, we recommend that the snapshot you use be as similar as possible to the image in the recording. Make sure it is the same length and that nothing is missing; missing parts will strongly affect the result.
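As an optional sanity check before running assisted mapping, you can verify that a full-page screenshot has the pixel width expected for the emulated device and is taller than a single viewport, which would indicate that the whole page was captured. The sketch below uses Pillow; the expected values are placeholders for your own device settings.

```python
# Optional sanity check (not part of Pro Lab): confirm that the screenshot's
# width matches the emulated device and that it is taller than one viewport,
# i.e. that the full page was captured. Expected values are placeholders.
from PIL import Image

EXPECTED_WIDTH = 390 * 3    # device CSS width x device scale factor (e.g. iPhone 12)
VIEWPORT_HEIGHT = 844 * 3   # one viewport height in device pixels

with Image.open("snapshot_full_page.png") as img:
    width, height = img.size

if width != EXPECTED_WIDTH:
    print(f"Unexpected width {width}px; expected {EXPECTED_WIDTH}px")
if height <= VIEWPORT_HEIGHT:
    print("Screenshot is no taller than one viewport; the full page may be missing")
else:
    print(f"Screenshot looks complete: {width}x{height}px")
```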