18226200. Devices, Methods, and Graphical User Interfaces for Processing Inputs to a Three-Dimensional Environment simplified abstract (Apple Inc.)

Devices, Methods, and Graphical User Interfaces for Processing Inputs to a Three-Dimensional Environment

Organization Name

Apple Inc.

Inventor(s)

Mark A. Ebbole of San Francisco, CA (US)

Leah M. Gum of Sunol, CA (US)

Chia-Ling Li of San Jose, CA (US)

Ashwin Kumar Asoka Kumar Shenoi of San Jose, CA (US)

Devices, Methods, and Graphical User Interfaces for Processing Inputs to a Three-Dimensional Environment - A simplified explanation of the abstract

This abstract first appeared for US patent application 18226200 titled 'Devices, Methods, and Graphical User Interfaces for Processing Inputs to a Three-Dimensional Environment'.

Simplified Explanation

The abstract of the patent application describes a computer system that detects a user's gaze and delivers information about that gaze input to user interface elements, depending on where the user is looking and whether the user's hand is held in a predefined configuration; a code sketch of this dispatch logic follows the list below.

  • The computer system detects a gaze input directed to a first location in the environment.
  • If the user's hand is in a predefined configuration during the gaze input, the computer system provides information about the gaze input to the first user interface element.
  • If the gaze input moves to a different, second location in the environment while the user's hand is maintained in the predefined configuration, the computer system provides information about the gaze input to a second user interface element corresponding to the second location.
  • If the user's hand is not in the predefined configuration during the gaze input, the computer system does not provide information about the gaze input to the first user interface element.
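
To make the gating concrete, here is a minimal Swift sketch of the dispatch logic described above. It is an illustration under assumptions, not Apple's implementation: every name in it (GazeDispatcher, HandConfiguration, UIElement, elementAt) is hypothetical, and the abstract does not say what the predefined hand configuration is.

  // A minimal sketch with simplified, hypothetical types.

  // A 2D gaze location (stand-in for a real 3D environment coordinate).
  struct Point { var x: Double; var y: Double }

  // Hand configurations the system might distinguish; which configuration
  // counts as "predefined" is an assumption, not stated in the abstract.
  enum HandConfiguration { case predefined, other }

  // A UI element that can receive information about a gaze input.
  final class UIElement {
      let name: String
      init(name: String) { self.name = name }
      func receiveGazeInfo(at location: Point) {
          print("\(name) received gaze info at (\(location.x), \(location.y))")
      }
  }

  // Dispatches gaze information to the element at the gaze location,
  // but only while the hand is held in the predefined configuration.
  struct GazeDispatcher {
      // Stand-in for real hit testing: maps a location to the element there.
      var elementAt: (Point) -> UIElement?

      func handleGaze(at location: Point, hand: HandConfiguration) {
          // Hand not in the predefined configuration: forgo providing
          // gaze information to any element.
          guard hand == .predefined else { return }
          // Hand in the predefined configuration: provide gaze information
          // to the element at the current gaze location. As the gaze moves
          // to a second location, later calls deliver to the element there.
          elementAt(location)?.receiveGazeInfo(at: location)
      }
  }

  // Usage: gaze lands on a first element, moves to a second element while
  // the hand stays in the predefined configuration, then the hand relaxes.
  let first = UIElement(name: "first element")
  let second = UIElement(name: "second element")
  let dispatcher = GazeDispatcher { $0.x < 100 ? first : second }

  dispatcher.handleGaze(at: Point(x: 40, y: 20), hand: .predefined)  // first receives info
  dispatcher.handleGaze(at: Point(x: 160, y: 20), hand: .predefined) // second receives info
  dispatcher.handleGaze(at: Point(x: 40, y: 20), hand: .other)       // nothing delivered

The claimed behavior lives in the guard statement: while the hand stays in the predefined configuration, each gaze sample is delivered to whichever element sits at the current gaze location, so moving the gaze to a second location retargets delivery to the second element; if the hand is not in that configuration, delivery is forgone entirely.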

Potential applications of this technology:

  • Assistive technology for individuals with limited mobility or physical disabilities, allowing them to interact with computer systems using gaze input and hand gestures.
  • Virtual reality and augmented reality systems, where users can navigate and interact with virtual environments using their gaze and hand gestures.
  • Gaming systems that incorporate gaze tracking and hand gesture recognition for more immersive and intuitive gameplay.

Problems solved by this technology:

  • Provides a more natural and intuitive way for users to interact with computer systems, reducing the need for physical input devices such as keyboards or mice.
  • Enables individuals with physical disabilities to access and use computer systems more easily.
  • Enhances the user experience in virtual reality and augmented reality environments by allowing for more natural and immersive interactions.

Benefits of this technology:

  • Improved accessibility for individuals with physical disabilities, allowing them to use computer systems more effectively.
  • Enhanced user experience in virtual reality and augmented reality environments, making interactions more intuitive and immersive.
  • Potential for increased productivity and efficiency across industries by offering a more natural, direct user interface.


Original Abstract Submitted

While a view of an environment is visible via a display generation component of a computer system, the computer system detects a gaze input directed to a first location, corresponding to a first user interface element, in the environment. In response to detecting the gaze input: if a user's hand is in a predefined configuration during the gaze input, the computer system: provides, to the first user interface element, information about the gaze input; and then, in response to detecting the gaze input moving to a different, second location in the environment while the user's hand is maintained in the predefined configuration, provides, to a second user interface element that corresponds to the second location, information about the gaze input. If the user's hand is not in the predefined configuration during the gaze input, the computer system forgoes providing, to the first user interface element, information about the gaze input.