20240019931. METHODS AND SYSTEMS FOR EYE-GAZE LOCATION DETECTION AND ACCURATE COLLECTION OF EYE-GAZE DATA simplified abstract (Amplio Learning Technologies Ltd.)

METHODS AND SYSTEMS FOR EYE-GAZE LOCATION DETECTION AND ACCURATE COLLECTION OF EYE-GAZE DATA

Organization Name

Amplio Learning Technologies Ltd.

Inventor(s)

Aric Katz of Haifa (IL)

METHODS AND SYSTEMS FOR EYE-GAZE LOCATION DETECTION AND ACCURATE COLLECTION OF EYE-GAZE DATA - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240019931 titled 'METHODS AND SYSTEMS FOR EYE-GAZE LOCATION DETECTION AND ACCURATE COLLECTION OF EYE-GAZE DATA'.

Simplified Explanation

The disclosed patent application describes methods and systems for detecting and collecting eye-gaze data using low-resolution, low-frame-rate cameras. The method calibrates a machine learning model by constraining the area of interest to a single axis: the user is shown lines of words and/or images on a screen, the camera captures images of the user's eyes while they look at those lines, and the calibrated model is applied to these images to detect the on-screen eye-gaze location with word-level accuracy. The method additionally combines eye-gaze and audio data collection by measuring the time difference between the detection of the gaze landing on a word and the user's pronunciation of that word's first phoneme.

  • The method accurately collects eye-gaze data from low-resolution, low-frame-rate cameras.
  • A machine learning model is calibrated by constraining the area of interest to a single axis.
  • The user is shown lines of words and/or images on a screen while the camera captures images of their eyes.
  • The calibrated model is applied to these images to detect the on-screen gaze location with word-level accuracy (see the sketch after this list).
  • Eye-gaze and audio data collection are combined by measuring the time difference between gaze-location detection and word pronunciation.
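To make the single-axis idea concrete, below is a minimal Python sketch of calibration and word-level detection. It is an illustration under stated assumptions, not the disclosed implementation: the application does not specify a model, so a ridge regression stands in for the machine learning model, and the feature extractor, function names, and word-boundary representation are all hypothetical.

import numpy as np
from sklearn.linear_model import Ridge

def calibrate_single_axis(eye_features: np.ndarray, target_x: np.ndarray) -> Ridge:
    # Fit a model that predicts only the horizontal (x) gaze coordinate.
    # Restricting the output to one axis is the "single axis" constraint:
    # the model never has to resolve vertical position, which is what
    # makes word-level accuracy plausible from a low-resolution camera.
    model = Ridge(alpha=1.0)  # hypothetical stand-in for the ML model
    model.fit(eye_features, target_x)  # target_x: known x-positions of calibration words
    return model

def detect_word(model: Ridge, eye_features: np.ndarray,
                word_boundaries: list[tuple[float, float]]) -> int:
    # Map the predicted x-coordinate to the index of the word under the gaze.
    x = float(model.predict(eye_features.reshape(1, -1))[0])
    for i, (left, right) in enumerate(word_boundaries):
        if left <= x <= right:
            return i
    return -1  # gaze fell outside the presented line of words

Because the screen shows one line of words at a time, the model only has to place the gaze along that line; a coarse camera that could never support full 2-D gaze estimation can still discriminate between word-sized intervals on a single axis.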

Potential applications of this technology:

  • Assistive technology for individuals with disabilities, allowing them to control devices or communicate using eye-gaze.
  • Market research and advertising, by analyzing where users focus their attention on screens.
  • Gaming and virtual reality, enabling more immersive experiences by tracking eye movements.
  • User experience research, to understand how users interact with interfaces and improve design.

Problems solved by this technology:

  • Accurate collection of eye-gaze data from low-resolution, low-frame-rate cameras.
  • Calibration of machine learning models to improve eye-gaze detection accuracy.
  • Integration of eye-gaze and audio data collection for synchronized analysis (see the sketch after this list).
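A minimal sketch of that integration follows, assuming gaze and audio events share a common clock. The event records and per-word pairing are hypothetical, and the first-phoneme timestamps would come from a speech-onset detector that is not shown here.

from dataclasses import dataclass

@dataclass
class GazeEvent:
    word_index: int
    timestamp: float  # seconds; when the gaze first landed on the word

@dataclass
class SpeechEvent:
    word_index: int
    timestamp: float  # seconds; onset of the word's first phoneme

def eye_voice_spans(gaze: list[GazeEvent], speech: list[SpeechEvent]) -> dict[int, float]:
    # For each word, the delay between looking at it and starting to say it:
    # the difference between the first timestamp (gaze-location detection)
    # and the second timestamp (first-phoneme pronunciation) described
    # in the abstract.
    gaze_by_word = {e.word_index: e.timestamp for e in gaze}
    return {
        s.word_index: s.timestamp - gaze_by_word[s.word_index]
        for s in speech
        if s.word_index in gaze_by_word
    }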

Benefits of this technology:

  • Improved accessibility for individuals with disabilities.
  • Enhanced user experience in various applications.
  • More accurate market research and advertising targeting.
  • Better understanding of user behavior and interaction with interfaces.


Original Abstract Submitted

Disclosed herein are methods and systems for eye-gaze location detection and accurate collection of eye-gaze data from low-resolution, low-frame-rate cameras. The method includes calibrating a machine learning model by constraining an area of interest to a single axis, presenting to the user one or more lines of words and/or images on a screen, capturing by a camera one or more images of the user's eye-gaze looking at said lines of words and/or images, applying the calibrated machine learning model on the one or more images of the user's eye-gaze, constraining the area of interest to a single axis, and detecting an eye-gaze location on the screen with a word-level accuracy. The method further includes combining eye-gaze and audio data collection by measuring a time difference between a first timestamp of a word eye-gaze location detection and a second timestamp of the word's first phoneme pronunciation by a user.