MobileEYE: Deep Learning based Mobile Device Eye Tracking Solution for Dynamic Visuals

This research project focuses on developing a robust and efficient eye-tracking system for mobile devices, specifically designed to accurately track gaze in video-based scenarios. Traditional eye-tracking methods, while effective for static images, struggle with the dynamic nature of video content. To address this challenge, the researchers propose a deep learning-based approach that leverages state-of-the-art neural network architectures to extract both spatial and temporal features from video frames.

The key contributions of this research include:

  1. A novel dataset: a new dataset capturing eye-tracking data from mobile users interacting with video content.
  2. Advanced deep learning models: two deep learning models, CNN+LSTM and CNN+GRU, developed to track gaze in video-based scenarios and evaluated against existing methods.
  3. Edge computing prototype: an edge computing prototype that enables real-time eye tracking on mobile devices by offloading the computational burden to nearby edge devices, improving resource utilization.
  4. Model optimization techniques: pruning and quantization applied to reduce the size and latency of the eye-tracking models for deployment on resource-constrained mobile devices while preserving accuracy.
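To make the CNN+GRU idea in contribution 2 concrete, the sketch below shows how a recurrent head can accumulate temporal context across per-frame features and regress a 2-D gaze point. This is a minimal NumPy illustration, not the paper's actual architecture: the CNN that embeds each video frame into a feature vector is assumed to run upstream, and all layer sizes and names (`GRUGazeHead`, `feat_dim`, `hidden_dim`) are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUGazeHead:
    """Minimal GRU over per-frame feature vectors, followed by a linear
    layer regressing a 2-D gaze point (x, y). A sketch of the CNN+GRU
    idea: the CNN feature extractor is assumed to run upstream, so each
    frame arrives here already embedded as a vector."""

    def __init__(self, feat_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        # Update gate, reset gate, and candidate-state weights.
        self.Wz = rng.normal(0, s, (hidden_dim, feat_dim + hidden_dim))
        self.Wr = rng.normal(0, s, (hidden_dim, feat_dim + hidden_dim))
        self.Wh = rng.normal(0, s, (hidden_dim, feat_dim + hidden_dim))
        self.Wo = rng.normal(0, s, (2, hidden_dim))  # gaze (x, y) output
        self.hidden_dim = hidden_dim

    def forward(self, frame_features):
        h = np.zeros(self.hidden_dim)
        for x in frame_features:               # one feature vector per frame
            xh = np.concatenate([x, h])
            z = sigmoid(self.Wz @ xh)          # update gate
            r = sigmoid(self.Wr @ xh)          # reset gate
            h_cand = np.tanh(self.Wh @ np.concatenate([x, r * h]))
            h = (1 - z) * h + z * h_cand       # hidden state carries temporal context
        return self.Wo @ h                     # predicted gaze point for the clip
```

The GRU's hidden state is what lets the model exploit motion across frames, which a per-image gaze estimator cannot do; swapping the GRU cell for an LSTM cell yields the CNN+LSTM variant.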
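Contribution 3 hinges on deciding when offloading a frame to an edge device actually pays off: transfer time plus edge inference must beat on-device inference. The helper below sketches that break-even check; the function name and parameters are illustrative assumptions, not part of the prototype described above.

```python
def choose_execution_site(frame_kb, device_ms_per_frame,
                          edge_ms_per_frame, uplink_kbps):
    """Offload a frame to the edge only when estimated transfer time plus
    edge inference beats on-device inference. All names and the latency
    model are illustrative; a real prototype would measure these online.
    """
    # kilobytes -> kilobits -> seconds on the uplink -> milliseconds.
    transfer_ms = frame_kb * 8.0 / uplink_kbps * 1000.0
    edge_total_ms = transfer_ms + edge_ms_per_frame
    return "edge" if edge_total_ms < device_ms_per_frame else "device"
```

For example, a 50 kB frame over a 20 Mbps uplink to a fast edge model offloads, while the same frame over a 1 Mbps uplink stays on-device, which is the resource-utilization trade-off the prototype exploits.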
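The pruning and quantization in contribution 4 can be sketched in a few lines: unstructured magnitude pruning zeroes the smallest weights, and symmetric 8-bit quantization stores weights as `int8` plus one float scale, cutting model size roughly 4x versus float32. This is a generic NumPy sketch of the two techniques, not the project's specific optimization pipeline.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Unstructured magnitude pruning: zero the smallest-magnitude
    fraction `sparsity` of the weights in `w`."""
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) <= thresh, 0.0, w)

def quantize_int8(w):
    """Symmetric 8-bit quantization: map floats to int8 with one scale."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale
```

Per-weight reconstruction error is bounded by half the quantization step, which is why accuracy typically degrades only slightly; pruned zeros additionally let sparse kernels skip work on-device.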

By addressing these challenges and leveraging the advancements in deep learning and edge computing, this research aims to pave the way for more accurate, efficient, and user-friendly eye-tracking applications on mobile devices, opening up new possibilities for personalized and immersive user experiences.
