20240036699. Devices, Methods, and Graphical User Interfaces for Processing Inputs to a Three-Dimensional Environment simplified abstract (Apple Inc.)

Devices, Methods, and Graphical User Interfaces for Processing Inputs to a Three-Dimensional Environment

Organization Name

Apple Inc.

Inventor(s)

Mark A. Ebbole of San Francisco CA (US)

Leah M. Gum of Sunol CA (US)

Chia-Ling Li of San Jose CA (US)

Ashwin Kumar Asoka Kumar Shenoi of San Jose CA (US)

Devices, Methods, and Graphical User Interfaces for Processing Inputs to a Three-Dimensional Environment - A simplified explanation of the abstract

This abstract first appeared for US patent application 20240036699, titled 'Devices, Methods, and Graphical User Interfaces for Processing Inputs to a Three-Dimensional Environment'.

Simplified Explanation

The abstract of this patent application describes a computer system that detects a user's gaze directed at locations in a displayed environment. The system provides information about the gaze input to the user interface element at the gazed location, but only while the user's hand is held in a predefined configuration.

  • The computer system detects a user's gaze input directed to a first location in the displayed environment.
  • If the user's hand is in a predefined configuration during the gaze input, the system provides information about the gaze input to the user interface element corresponding to the first location.
  • If the gaze input moves to a different, second location while the user's hand is maintained in the predefined configuration, the system provides information about the gaze input to the user interface element corresponding to the second location.
  • If the user's hand is not in the predefined configuration during the gaze input, the system does not provide information about the gaze input to the user interface element corresponding to the first location.
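The gated-dispatch behavior above can be sketched in Python. This is an illustrative sketch only; the names (`UIElement`, `process_gaze`, `PREDEFINED_CONFIG`) and the string-based hand-configuration check are assumptions for clarity, not details from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class UIElement:
    """Hypothetical UI element that can receive gaze-input information."""
    name: str
    gaze_events: list = field(default_factory=list)

    def receive_gaze_info(self, info):
        self.gaze_events.append(info)

# Hypothetical label for the predefined hand configuration.
PREDEFINED_CONFIG = "pinch_ready"

def process_gaze(element_at_location, hand_config, gaze_info):
    """Provide gaze info to the element at the gazed location only while
    the user's hand is in the predefined configuration; otherwise forgo it."""
    if hand_config == PREDEFINED_CONFIG:
        element_at_location.receive_gaze_info(gaze_info)
        return True   # information was provided
    return False      # information was forgone

# Gaze lands on a first element, then moves to a second element while the
# hand remains in the predefined configuration: both receive the info.
first, second = UIElement("first"), UIElement("second")
process_gaze(first, "pinch_ready", {"location": "first"})
process_gaze(second, "pinch_ready", {"location": "second"})

# With the hand outside the predefined configuration, delivery is forgone.
third = UIElement("third")
process_gaze(third, "open_palm", {"location": "third"})

print(len(first.gaze_events), len(second.gaze_events), len(third.gaze_events))
# → 1 1 0
```

The key design point the claims describe is that the same gaze stream is routed to different elements as the gaze moves, with the hand configuration acting as a gate on whether any element receives it at all.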

Potential Applications:

  • Assistive technology for individuals with limited mobility or physical disabilities, allowing them to interact with a computer system using gaze input and hand gestures.
  • Virtual reality or augmented reality systems, where users can navigate and interact with virtual environments using gaze input and hand gestures.

Problems Solved:

  • Enabling users to interact with a computer system using gaze input and hand gestures, providing an alternative input method for individuals with limited mobility or physical disabilities.
  • Enhancing the user experience in virtual reality or augmented reality systems by allowing more natural and intuitive interaction with virtual environments.

Benefits:

  • Improved accessibility for individuals with limited mobility or physical disabilities, providing them with a means to interact with a computer system.
  • Enhanced user experience in virtual reality or augmented reality systems, allowing users to interact with virtual environments in a more natural and intuitive way.


Original Abstract Submitted

while a view of an environment is visible via a display generation component of a computer system, the computer system detects a gaze input directed to a first location, corresponding to a first user interface element, in the environment. in response to detecting the gaze input: if a user's hand is in a predefined configuration during the gaze input, the computer system: provides, to the first user interface element, information about the gaze input; and then, in response to detecting the gaze input moving to a different, second location in the environment while the user's hand is maintained in the predefined configuration, provides, to a second user interface element that corresponds to the second location, information about the gaze input. if the user's hand is not in the predefined configuration during the gaze input, the computer system forgoes providing, to the first user interface element, information about the gaze input.