
Event-Based Eye Tracking. AIS 2024 Challenge Survey


Abstract:

This survey reviews the AIS 2024 Event-Based Eye Tracking (EET) Challenge. The challenge task focuses on processing eye movement recorded with event cameras and predicting the pupil center of the eye. The challenge emphasizes efficient eye tracking with event cameras, targeting a good trade-off between task accuracy and computational efficiency. During the challenge period, 38 participants registered for the Kaggle competition, and 8 teams submitted a challenge factsheet. The novel and diverse methods from the submitted factsheets are reviewed and analyzed in this survey to advance future event-based eye tracking research.
Date of Conference: 17-18 June 2024
Date Added to IEEE Xplore: 27 September 2024
Conference Location: Seattle, WA, USA

1. Introduction

The rapid development of augmented reality (AR) and virtual reality (VR) technologies in industry has significantly magnified the importance of precise and efficient eye-tracking systems [15], [16]. Furthermore, eye tracking and related tasks, including gaze detection and pupil shape detection, have tremendous potential in the field of wearable healthcare technology, offering novel approaches for diagnosing and monitoring conditions such as Parkinson's and Alzheimer's diseases through the analysis of eye movement patterns [14], [25], [31].
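To make the task concrete: an event camera emits an asynchronous stream of (timestamp, x, y, polarity) events rather than frames, and most challenge entries first convert a time window of events into a dense tensor before regressing the pupil center. The sketch below is a minimal, illustrative binning of events into a voxel grid; it is not any particular team's method, and the function name and interface are assumptions for illustration.

```python
import numpy as np

def events_to_voxel_grid(events, height, width, num_bins):
    """Accumulate (t, x, y, polarity) events into a (num_bins, H, W) grid.

    `events` is an (N, 4) array: timestamp (any monotonic unit),
    x and y pixel coordinates, and polarity in {-1, +1}. The time
    window is split into `num_bins` slices; each event adds its
    polarity to the cell (bin, y, x) it falls into. Illustrative
    sketch only, not the challenge's reference preprocessing.
    """
    grid = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return grid
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    t0, t1 = t.min(), t.max()
    # Map each timestamp to a bin index in [0, num_bins - 1].
    bins = ((t - t0) / max(t1 - t0, 1e-9) * (num_bins - 1)).astype(int)
    # Unbuffered accumulation so repeated (bin, y, x) cells all count.
    np.add.at(grid, (bins, y, x), p)
    return grid
```

A downstream model (e.g. a CNN or recurrent network, as in several of the surveyed methods) would then map each voxel grid to a 2D pupil-center coordinate.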

References

1. Kaggle Competition for Event-based Eye Tracking - AIS2024 CVPR Workshop, [online] Available: https://www.kaggle.com/competitions/event-based-eye-tracking-ais2024.
2. "MLflow: A Machine Learning Lifecycle Platform", [online] Available: https://mlflow.org/.
3. "DVXplorer Mini User Guide", [online] Available: https://inivation.com/wp-content/uploads/2023/03/DVXplorer-Mini.pdf.
4. Alessandro Aimar, Hesham Mostafa, Enrico Calabrese, Antonio Rios-Navarro, Ricardo Tapiador-Morales, Iulia-Alexandra Lungu, et al., "NullHop: A flexible convolutional neural network accelerator based on sparse representations of feature maps", IEEE Transactions on Neural Networks and Learning Systems, pp. 644-656, 2017.
5. Sami Barchid, José Mennesson and Chaabane Djéraba, "Bina-Rep event frames: A simple and effective representation for event-based cameras", 2022.
6. Qinyu Chen, Yan Huang, Rui Sun, Wenqing Song, Zhonghai Lu, Yuxiang Fu, et al., "An efficient accelerator for multiple convolutions from the sparsity perspective", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 28, no. 6, pp. 1540-1544, 2020.
7. Qinyu Chen, Chang Gao, Xinyuan Fang and Haitao Luan, "Skydiver: A spiking neural network accelerator exploiting spatio-temporal workload balance", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 41, no. 12, pp. 5732-5736, 2022.
8. Qinyu Chen, Zuowen Wang, Shih-Chii Liu and Chang Gao, "3ET: Efficient event-based eye tracking using a change-based ConvLSTM network", 2023 IEEE Biomedical Circuits and Systems Conference (BioCAS), 2023.
9. Christopher Choy, JunYoung Gwak and Silvio Savarese, "4D spatio-temporal ConvNets: Minkowski convolutional neural networks", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3075-3084, 2019.
10. Christopher Choy, Jaesik Park and Vladlen Koltun, "Fully convolutional geometric features", Proceedings of the IEEE International Conference on Computer Vision, pp. 8958-8966, 2019.
11. Junyoung Chung, Çaglar Gülçehre, KyungHyun Cho and Yoshua Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling", CoRR, 2014.
12. Marcos V. Conde, Zhijun Lei, Wen Li, Ioannis Katsavounidis, Radu Timofte, et al., "Real-time 4K super-resolution of compressed AVIF images. AIS 2024 challenge survey", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.
13. Marcos V. Conde, Saman Zadtootaghaj, Nabajeet Barman, Radu Timofte, et al., "AIS 2024 challenge on video quality assessment of user-generated content: Methods and results", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.
14. Huiyu Duan, Guangtao Zhai, Xiongkuo Min, Zhaohui Che, Yi Fang, Xiaokang Yang, et al., "A dataset of eye movements for the children with autism spectrum disorder", Proceedings of the 10th ACM Multimedia Systems Conference, pp. 255-260, 2019.
15. Ajoy S Fernandes, T Scott Murdison and Michael J Proulx, "Leveling the playing field: A comparative reevaluation of unmodified eye tracking as an input and interaction modality for VR", IEEE Transactions on Visualization and Computer Graphics, vol. 29, no. 5, pp. 2269-2279, 2023.
16. Wolfgang Fuhl, Gjergji Kasneci and Enkelejda Kasneci, "TEyeD: Over 20 million real-world eye images with pupil, eyelid and iris 2D and 3D segmentations, 2D and 3D landmarks, 3D eyeball, gaze vector and eye movement types", 2021 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), pp. 367-375, 2021.
17. G. Gallego, T. Delbruck, G. Orchard, C. Bartolozzi, B. Taba, A. Censi, et al., "Event-based vision: A survey", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 44, no. 1, pp. 154-180, 2022.
18. Chang Gao, Tobi Delbruck and Shih-Chii Liu, "Spartus: A 9.4 TOp/s FPGA-based LSTM accelerator exploiting spatio-temporal sparsity", IEEE Transactions on Neural Networks and Learning Systems, 2022.
19. Mathias Gehrig and Davide Scaramuzza, "Recurrent vision transformers for object detection with event cameras", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13884-13893, 2023.
20. Albert Gu and Tri Dao, "Mamba: Linear-time sequence modeling with selective state spaces", 2023.
21. Andrew Howard, Mark Sandler, Grace Chu, Liang-Chieh Chen, Bo Chen, Mingxing Tan, et al., "Searching for MobileNetV3", 2019.
22. Diederik P. Kingma and Jimmy Ba, "Adam: A method for stochastic optimization", 2014.
23. Diederik P. Kingma and Jimmy Ba, "Adam: A method for stochastic optimization", 2017.
24. Xavier Lagorce, Garrick Orchard, Francesco Galluppi, Bertram E. Shi and Ryad B. Benosman, "HOTS: A hierarchy of event-based time-surfaces for pattern recognition", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 7, pp. 1346-1359, 2017.
25. Kwang-Hyuk Lee and Leanne M. Williams, "Eye movement dysfunction as a biological marker of risk for schizophrenia", Australian & New Zealand Journal of Psychiatry, vol. 34, no. 1 suppl., pp. A91-A100, 2000.
26. Gregor Lenz, Kenneth Chaney, Sumit Bam Shrestha, Omar Oubari, Serge Picaud and Guido Zarrella, "Tonic: Event-based datasets and transformations", 2021, [online] Available: https://tonic.readthedocs.io.
27. Chenghan Li, Christian Brandli, Raphael Berner, Hongjie Liu, Minhao Yang, Shih-Chii Liu, et al., "Design of an RGBW color VGA rolling and global shutter dynamic and active-pixel vision sensor", 2015 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 718-721, 2015.
28. Patrick Lichtsteiner, Christoph Posch and Tobi Delbruck, "A 128 × 128 120 dB 15 μs latency asynchronous temporal contrast vision sensor", IEEE Journal of Solid-State Circuits, vol. 43, no. 2, pp. 566-576, 2008.
29. Nico Messikommer, Daniel Gehrig, Antonio Loquercio and Davide Scaramuzza, "Event-based asynchronous sparse convolutional networks", 2020.
30. Yan Ru Pei, Sasskia Brüers, Sébastien Crouzet, Douglas McLelland and Olivier Coenen, "A lightweight spatiotemporal network for online eye tracking with event camera", Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 2024.