Eye tracking is a key sensing technology for wearable glasses, where it enables display enhancement methods at the application level. State-of-the-art eye tracking systems rely on video oculography (VOG), which uses a camera-based system to infer the user's gaze from a sequence of images. The update rate of VOG systems is often limited by the high power consumption of the camera sensors and of the image processing algorithms needed to estimate the user's gaze from the captured images.
These video-based systems fall short in update rate, ease of integration, and robustness to glasses slippage. To overcome these limitations, in an article published in IEEE Sensors Journal, researchers have proposed a model-based fusion algorithm that estimates gaze from multiple static laser feedback interferometry (LFI) sensors. Each sensor can measure both the distance to the eye and the eye's rotational velocity.
The article reviews related work, including geometric model-based gaze estimation algorithms and the use of LFI sensors in near-eye settings. The authors then introduce the static LFI sensor modality for gaze estimation and propose a slippage-robust, calibration-free gaze estimation algorithm.
Gaze Reconstruction Algorithm
The general structure of the gaze estimator consists of three individual stages:
Region of Intersection Classification:
For each laser, determine the region of intersection with the eye (none, sclera, iris, or retina).
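The paper does not spell out the classifier here, but the idea can be illustrated with a minimal, hypothetical rule: because a beam entering the pupil travels farther before reflecting off the retina than one hitting the sclera or iris, the measured path length alone can separate the regions. The thresholds below are illustrative placeholders, not values from the article.

```python
from enum import Enum


class Region(Enum):
    NONE = 0
    SCLERA = 1
    IRIS = 2
    RETINA = 3


def classify_region(distance_mm, d_sclera=(20.0, 28.0), d_iris=(28.0, 33.0)):
    """Assign a region label from a single LFI distance reading.

    The distance bands are hypothetical: a beam passing through the pupil
    reflects off the retina and therefore reports a longer path length
    than one hitting the sclera or iris.
    """
    if distance_mm is None:  # no valid return signal
        return Region.NONE
    lo_s, hi_s = d_sclera
    lo_i, hi_i = d_iris
    if lo_s <= distance_mm < hi_s:
        return Region.SCLERA
    if lo_i <= distance_mm < hi_i:
        return Region.IRIS
    if distance_mm >= hi_i:
        return Region.RETINA
    return Region.NONE  # implausibly short range, e.g. lash or lid
```

In practice the classification would also draw on the velocity channel and temporal context; this sketch only conveys the per-laser labeling step.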
LFI-Sensor Position and Pose Estimation:
To estimate the gaze angle, it is necessary to determine the absolute position and orientation of the laser sensors mounted on the glasses. According to the paper, these are computed from the known sensor positions in the glasses frame and the (scalar) distance measurements. Collecting the measurements of all lasers over an interval in which the glasses' position is constant yields a nonlinear system of equations. The authors note that, although nonlinear, this root-finding problem can be solved with a Levenberg–Marquardt algorithm.
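A minimal sketch of this root-finding step, under simplifying assumptions not taken from the paper: the eye is modeled as a sphere of known radius at a known centre, each scalar measurement is treated as the range from sensor to eye surface along the line to the eye centre, and the unknown is a single glasses translation (slippage). The sensor positions and eye parameters below are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import least_squares

R_EYE = 12.0                           # mm, illustrative eyeball radius
EYE_CENTRE = np.array([0.0, 0.0, 0.0])
SENSORS = np.array([                   # hypothetical sensor positions, glasses frame (mm)
    [15.0, 5.0, 25.0],
    [-10.0, 8.0, 24.0],
    [0.0, -12.0, 26.0],
])


def residuals(t, measured):
    """Predicted sensor-to-eye-surface range minus the measured distance."""
    ranges = np.linalg.norm(SENSORS + t - EYE_CENTRE, axis=1) - R_EYE
    return ranges - measured


# Simulate a small slippage and the noise-free distances it would produce.
t_true = np.array([2.0, -1.0, 0.5])
measured = np.linalg.norm(SENSORS + t_true - EYE_CENTRE, axis=1) - R_EYE

# Levenberg-Marquardt solve of the nonlinear range equations.
sol = least_squares(residuals, x0=np.zeros(3), args=(measured,), method="lm")
print(np.round(sol.x, 3))  # recovers t_true in this noise-free simulation
```

The paper solves for the full sensor pose rather than a pure translation, but the structure is the same: stack the range equations of all lasers and hand the nonlinear system to a Levenberg–Marquardt solver.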
Gaze Angle Estimation:
Having carried out the classification of measurements and estimation of absolute sensor position, the final step is the actual gaze estimation. The main idea is to estimate and integrate the angular velocity of the eye.
The researchers provide equations for determining the offset parameters required to turn the integrated velocity into an absolute gaze estimate.
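The integration step above can be sketched in a few lines. This is not the paper's estimator; it is a simulation with made-up numbers showing why the offset parameters matter: a constant bias in the measured angular velocity accumulates linearly in the integrated gaze unless it is subtracted first.

```python
import numpy as np

FS = 1000.0                    # Hz, matching the system's 1 kHz update rate
dt = 1.0 / FS
bias = 0.05                    # deg/s, hypothetical constant sensor bias

t = np.arange(0.0, 1.0, dt)   # one second of samples
# Simulated true angular velocity of the eye (smooth back-and-forth motion).
omega_true = 10.0 * np.cos(2.0 * np.pi * 2.0 * t)
omega_meas = omega_true + bias

# Naive integration drifts by bias * elapsed_time; subtracting the
# estimated offset removes the drift.
gaze_raw = np.cumsum(omega_meas) * dt
gaze_corrected = np.cumsum(omega_meas - bias) * dt

drift = gaze_raw[-1] - gaze_corrected[-1]  # accumulates to bias * 1 s
```

After one second the uncorrected estimate has drifted by bias × 1 s, which is why the offset parameters must be estimated alongside the gaze itself.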
Evaluating the Approach
The researchers dedicate a significant portion of the paper to evaluating the proposed static LFI gaze estimation approach, including the LFI sensor measurement noise characterization, the region of intersection classification accuracy, and the gaze estimation accuracy in the presence of glasses slippage.
The proposed system is robust to both ambient light and glasses slippage. In the evaluation, a gaze accuracy of 1.79° is achieved at an outstanding update rate of 1 kHz. At the same time, the sensors consume only a fraction of the power of state-of-the-art video-based systems. According to the article, the proposed LFI eye tracking approach can track the eye in the presence of glasses slippage. Furthermore, the sensor fusion and gaze estimation approach can estimate the eye's optical axis without additional calibration, which is critical for augmented reality (AR) glasses and improves the user experience overall.
The researchers point out that the main limitation of the proposed LFI eye tracking system is that the laser beams must hit the surface of the eye to measure its rotational velocity; incorrect velocity measurements caused by the eyelashes or eyelids could therefore lead to gaze estimation errors. The researchers plan to further investigate the achievable gaze estimation accuracy and to combine the proposed multi-LFI system with a low frame rate camera sensor.