Classifying raw data into relevant eye movements is an important step in eye tracking research. This page gives a short overview of how eye movements are classified in Tobii's eye tracking analysis software: Tobii Studio and Tobii Pro Lab.
During a recording, an eye tracker collects raw eye movement data points at its sampling rate (30, 60, 120, or 300 Hz in the case of Tobii Pro eye trackers). Each data point consists of a timestamp and x, y coordinates and is sent to the database of the analysis application (e.g. Tobii Pro Studio, Tobii Pro Lab, or an application built with the Tobii Pro SDK) running on the computer connected to the eye tracker. To visualize and interpret the data, these raw data points are further processed into attentional eye movements, such as fixations, and overlaid on the stimuli used in the test. Aggregating data points into relevant eye movements significantly reduces the amount of eye tracking data to process, allowing the researcher to focus on the measures relevant to the research question. Another function of the filters is to check whether the sample points are valid, e.g. discarding points with no eye position data, or points where the system recorded only one eye and failed to identify whether it was the left or the right eye, making it unable to estimate the final gaze point.
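The per-sample validity checks described above can be sketched in Python. This is an illustrative model, not the actual Tobii Pro SDK API: the `GazeSample` structure, its field names, and the validity rules are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

Point = Tuple[float, float]  # (x, y) gaze position in normalized screen coordinates


@dataclass
class GazeSample:
    """Hypothetical raw sample: a timestamp plus per-eye gaze positions.

    An eye's position is None when the tracker could not detect that eye.
    """
    timestamp_us: int
    left_eye: Optional[Point]
    right_eye: Optional[Point]


def is_valid(sample: GazeSample) -> bool:
    """Keep a sample only if at least one eye was detected.

    This sketch assumes the tracker has already labeled which eye is
    which; samples where that identification failed would be discarded
    upstream, as described in the text.
    """
    return sample.left_eye is not None or sample.right_eye is not None


def combined_gaze(sample: GazeSample) -> Optional[Point]:
    """Average both eyes when available, otherwise use the single detected eye."""
    eyes = [e for e in (sample.left_eye, sample.right_eye) if e is not None]
    if not eyes:
        return None
    x = sum(e[0] for e in eyes) / len(eyes)
    y = sum(e[1] for e in eyes) / len(eyes)
    return (x, y)
```

A stream of such samples would first be filtered with `is_valid`, then reduced to one combined gaze point per timestamp before fixation classification.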
Tobii Pro Studio offers three different fixation filters for grouping raw data into fixations, while Tobii Pro Lab uses one fixation filter to process the data. These filters are composed of algorithms that calculate whether raw data points belong to the same fixation. The basic idea behind these algorithms is that if two gaze points lie within a pre-defined minimum distance of each other (Tobii Fixation Filter and ClearView Fixation Filter), or the gaze speed between them falls below a certain threshold (Tobii I-VT Filter), then they are allocated to the same fixation; in other words, the user kept the eyes relatively still between the two sampling points.
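The velocity-threshold idea behind the Tobii I-VT Filter can be sketched as follows. This is a minimal illustration, not the actual filter: the 30 deg/s default threshold matches published I-VT descriptions, but the pixels-per-degree conversion, the handling of the first sample, and the omission of noise reduction and fixation merging are simplifying assumptions.

```python
import math
from typing import List, Tuple

Sample = Tuple[float, float, float]  # (timestamp in seconds, x in pixels, y in pixels)


def classify_ivt(samples: List[Sample],
                 velocity_threshold_deg_s: float = 30.0,
                 px_per_degree: float = 40.0) -> List[str]:
    """Label each sample 'fixation' or 'saccade' by point-to-point velocity.

    The angular velocity between consecutive samples is approximated by
    converting the on-screen pixel distance to visual degrees with an
    assumed px_per_degree factor (in practice this depends on screen size
    and viewing distance). Samples moving slower than the threshold are
    grouped into the same fixation.
    """
    if not samples:
        return []
    labels = ["fixation"]  # first sample has no predecessor; label it arbitrarily
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        dist_deg = math.hypot(x1 - x0, y1 - y0) / px_per_degree
        velocity = dist_deg / dt if dt > 0 else float("inf")
        labels.append("fixation" if velocity < velocity_threshold_deg_s else "saccade")
    return labels
```

For example, with samples 1/60 s apart (a 60 Hz tracker), a 1-pixel step corresponds to about 1.5 deg/s and is classified as part of a fixation, while a 400-pixel jump corresponds to roughly 600 deg/s and is classified as a saccade.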