18584782. DETERMINATION OF GAZE POSITION ON MULTIPLE SCREENS USING A MONOCULAR CAMERA simplified abstract (Intel Corporation)


DETERMINATION OF GAZE POSITION ON MULTIPLE SCREENS USING A MONOCULAR CAMERA

Organization Name

Intel Corporation

Inventor(s)

Elad Sunray of Haifa (IL)

Dmitry Rudoy of Haifa (IL)

Noam Levy of Karmiel (IL)

DETERMINATION OF GAZE POSITION ON MULTIPLE SCREENS USING A MONOCULAR CAMERA - A simplified explanation of the abstract

This abstract first appeared for US patent application 18584782 titled 'DETERMINATION OF GAZE POSITION ON MULTIPLE SCREENS USING A MONOCULAR CAMERA'.

Simplified Explanation

The patent application describes systems and methods for real-time, efficient gaze position determination using a monocular camera on a consumer-grade laptop. Gaze tracking of this kind supports human-computer interactions such as window selection, tracking user attention on screen information, gaming, augmented reality, and virtual reality.
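
The abstract does not disclose the network architecture, so the following is only a minimal sketch of the kind of lightweight gaze regressor such a system might use: a small convolutional network (here a hypothetical `TinyGazeNet`, assuming a PyTorch implementation) that maps an eye or face crop from the camera to gaze pitch and yaw angles while keeping computational cost low.

```python
# Hypothetical sketch only: the patent does not disclose its network design.
# A few small convolutions keep parameters and FLOPs low, which is what makes
# real-time inference feasible on a consumer-grade laptop.
import torch
import torch.nn as nn

class TinyGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regress two angles describing the line-of-sight: pitch and yaw (radians).
        self.head = nn.Linear(64, 2)

    def forward(self, eye_crop: torch.Tensor) -> torch.Tensor:
        x = self.features(eye_crop).flatten(1)
        return self.head(x)

if __name__ == "__main__":
    model = TinyGazeNet().eval()
    dummy_crop = torch.rand(1, 3, 64, 64)   # stand-in for a camera eye/face crop
    with torch.no_grad():
        pitch, yaw = model(dummy_crop)[0].tolist()
    print(f"estimated gaze angles: pitch={pitch:.3f} rad, yaw={yaw:.3f} rad")
```

The predicted angles, together with an estimate of the eye's 3D position, define the line-of-sight that is then intersected with a screen (see the geometry sketch under Key Features and Innovation).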

Key Features and Innovation

  • Real-time, efficient gaze position determination using a monocular camera
  • Neural network that estimates gaze position within about four degrees of accuracy
  • Very low computational complexity
  • Gaze position estimation across multiple screens, determining which screen the user is viewing and a gaze target area on that screen
  • Estimation of the user's line-of-sight and its intersection with a two-dimensional (2D) screen (see the geometry sketch after this list)
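
The abstract describes gaze estimation as finding the user's line-of-sight and intersecting it with a screen. Purely as an illustration of that geometry (not the patented method), the sketch below models each screen as a rectangle in a shared 3D coordinate frame and returns which screen a gaze ray hits and where; the screen dictionary layout, function names, and NumPy implementation are assumptions for this example.

```python
# Minimal geometry sketch, not the patented method: given a gaze ray (origin and
# direction in a common 3D frame) and screens modeled as rectangles in that frame,
# find which screen the ray hits and the normalized hit position on it.
import numpy as np

def intersect_screen(origin, direction, screen):
    """Return normalized (u, v) in [0, 1] if the ray hits the screen, else None.

    `screen` is a hypothetical dict with keys:
      'corner' - 3D position of the screen's top-left corner
      'x_axis' - 3D vector spanning the screen width  (length = physical width)
      'y_axis' - 3D vector spanning the screen height (length = physical height)
    """
    normal = np.cross(screen["x_axis"], screen["y_axis"])
    denom = np.dot(normal, direction)
    if abs(denom) < 1e-9:                      # ray is parallel to the screen plane
        return None
    t = np.dot(normal, screen["corner"] - origin) / denom
    if t <= 0:                                 # screen lies behind the viewer
        return None
    hit = origin + t * direction               # 3D intersection point
    rel = hit - screen["corner"]
    u = np.dot(rel, screen["x_axis"]) / np.dot(screen["x_axis"], screen["x_axis"])
    v = np.dot(rel, screen["y_axis"]) / np.dot(screen["y_axis"], screen["y_axis"])
    if 0.0 <= u <= 1.0 and 0.0 <= v <= 1.0:    # hit point inside the rectangle
        return u, v
    return None

def gaze_target(origin, direction, screens):
    """Return (screen_index, (u, v)) for the screen being viewed, or None."""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    for i, screen in enumerate(screens):
        uv = intersect_screen(origin, direction, screen)
        if uv is not None:
            return i, uv
    return None
```

A real system would express the eye position and all screen rectangles in one calibrated coordinate frame; multiplying the normalized (u, v) by a screen's pixel resolution then gives a gaze target area in pixels.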

Potential Applications

This technology is designed to work across different head poses, facial expressions, cameras, screens, and illumination conditions. It can be applied to gaming, augmented reality, virtual reality, window selection, and tracking user attention on screen information.

Problems Solved

This technology addresses the need for accurate, real-time gaze position determination using a monocular camera on consumer-grade laptops. It also solves the problem of determining which of multiple screens a user is viewing, and does so across varied head poses, facial expressions, cameras, screens, and illumination conditions.

Benefits

  • Gaze position estimation within about four degrees of angular accuracy (see the worked example after this list)
  • Real-time performance on consumer-grade laptops
  • Versatile applications in gaming, augmented reality, and virtual reality
  • Very low computational complexity for efficient processing
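
As a rough sense of scale for the "about four degrees" figure, the snippet below converts angular error into on-screen distance; the 60 cm viewing distance is an assumed typical value, not something stated in the abstract.

```python
import math

# Assumed typical laptop viewing distance; not specified in the patent abstract.
viewing_distance_cm = 60.0
angular_error_deg = 4.0

# An angular gaze error of theta projects onto the screen as roughly d * tan(theta).
on_screen_error_cm = viewing_distance_cm * math.tan(math.radians(angular_error_deg))
print(f"~{on_screen_error_cm:.1f} cm on screen at {viewing_distance_cm:.0f} cm")  # ~4.2 cm
```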

Commercial Applications

  • Eye-tracking systems for gaming
  • User attention tracking in augmented reality applications
  • Gaze-based interactions in virtual reality environments
  • Market research analyzing user behavior and preferences

Prior Art

Prior art related to this technology may include research on gaze tracking systems using monocular cameras, neural networks for gaze estimation, and applications of gaze tracking in human-computer interactions.

Frequently Updated Research

Ongoing research covers improving the accuracy and speed of gaze position determination with monocular cameras, advances in neural network algorithms for gaze estimation, and applications of gaze tracking in emerging technologies such as augmented reality and virtual reality.

Questions about Gaze Position Determination

How does gaze tracking using a monocular camera differ from other eye-tracking technologies?

Gaze tracking using a monocular camera relies on estimating the user's line-of-sight and intersecting it with a 2D screen, while other technologies may use multiple cameras or infrared sensors for more precise tracking.

What are the potential limitations of gaze position determination using a monocular camera?

Potential limitations may include accuracy issues in low-light conditions, challenges in tracking rapid eye movements, and difficulties in determining gaze position with high precision.


Original Abstract Submitted

Systems and methods for real-time, efficient, monocular gaze position determination that can be performed in real-time on a consumer-grade laptop. Gaze tracking can be used for human-computer interactions, such as window selection, user attention on screen information, gaming, augmented reality, and virtual reality. Gaze position estimation from a monocular camera involves estimating the line-of-sight of a user and intersecting the line-of-sight with a two-dimensional (2D) screen. The system uses a neural network to determine gaze position within about four degrees of accuracy while maintaining very low computational complexity. The system can be used to determine gaze position across multiple screens, determining which screen a user is viewing as well as a gaze target area on the screen. There are many different scenarios in which a gaze position estimation system can be used, including different head poses, different facial expressions, different cameras, different screens, and various illumination scenarios.