Modeling State Shifting via Local-Global Distillation for Event-Frame Gaze Tracking
Jiading Li, Zhiyu Zhu, Jinhui Hou, Junhui Hou, Jinjian Wu
Published 2024 in IEEE Transactions on Mobile Computing
ABSTRACT
This paper tackles the problem of passive gaze estimation using both event and frame (i.e., 2D image) data. Because eye physiology differs considerably across individuals, accurately estimating gaze purely from a single observed state is intractable. We therefore reformulate gaze estimation as quantifying the shift from the current state to several previously registered anchor states. Specifically, we propose a two-stage, learning-based gaze estimation framework that decomposes the task into a coarse-to-fine process of anchor-state selection followed by fine-grained gaze localization. Moreover, to improve generalization, instead of training a single large gaze estimation network directly, we align a group of local experts with a student network via a novel denoising distillation algorithm that applies denoising diffusion techniques to iteratively remove the noise inherent in event data. Extensive experiments demonstrate the effectiveness of the proposed method, which surpasses state-of-the-art methods by a large margin of 15%.
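As an illustration only (not the authors' implementation), the coarse-to-fine idea in the abstract — first select the closest registered anchor state, then regress the residual shift from that anchor to the current state — can be sketched as follows. All names here are hypothetical, and the linear `offset_fn` merely stands in for a learned offset-regression network:

```python
import numpy as np

def select_anchor(feature, anchor_feats):
    """Coarse stage: pick the registered anchor state whose feature
    vector is nearest (Euclidean distance) to the current state's."""
    dists = np.linalg.norm(anchor_feats - feature, axis=1)
    return int(np.argmin(dists))

def estimate_gaze(feature, anchor_feats, anchor_gazes, offset_fn):
    """Fine stage: regress the state shift relative to the selected
    anchor and add it to that anchor's registered gaze location."""
    k = select_anchor(feature, anchor_feats)
    return anchor_gazes[k] + offset_fn(feature - anchor_feats[k])

# Toy demo: two registered anchor states with known gaze locations.
anchor_feats = np.array([[0.0, 0.0], [1.0, 1.0]])
anchor_gazes = np.array([[10.0, 10.0], [20.0, 20.0]])
offset_fn = lambda delta: 5.0 * delta  # stand-in for a learned regressor

gaze = estimate_gaze(np.array([0.9, 1.1]), anchor_feats, anchor_gazes, offset_fn)
# The second anchor is nearest, so the estimate is a small shift from its gaze.
```

The point of the decomposition is that the fine-stage network only has to model small residual shifts around a handful of per-user anchors, rather than the full mapping from eye appearance to gaze.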
PUBLICATION RECORD
- Publication year
2024
- Venue
IEEE Transactions on Mobile Computing
- Publication date
2024-03-31
- Fields of study
Computer Science
- Source metadata
Semantic Scholar